So I’ve got my infrastructure Pi boards all in a dogbone rack, and I’ve got my Desktop system on a Pi Model 3 and find it good enough as a Daily Driver (still tuning the preferred ‘release’ to run, at the moment wandering between Arch, Debian, and Devuan as the mood strikes me or depending on where I’ve left a bit of something…)
So I figured maybe it’s time to set about using all these cores. (I’ve got ‘top’ running in three windows from 3 of the 4 Pi boards and seeing them all at 99% idle is, well, it offends my sense of waste-not want-not…)
I’ve made GIStemp ‘go’ before, so on the docket is “do it again” with the more recent version (AND finish the conversion to Little Endian in Step 4-5 block that I never finished… I got ‘distracted’ by the discovery that the ‘magic sauce’ data molestation was moving upstream into the GHCN, USHCN and all the little National HCNs from around the world… and spent a year or three doing ‘dig here!’ in that…) So that’s to be done “sometime”. But it takes nearly no CPU to run GIStemp. See:
https://chiefio.wordpress.com/2010/08/09/obsolescent-but-not-obsolete/
for a photo and description of the workstation I’d done the first port on. Pentium class AMD CPU at 400 MHz with 128 MB of memory. Yes, far less than the modern Raspberry Pi… But I’d already DONE a trial run at the port-to-the-Pi, and the old code compiled fine:
https://chiefio.wordpress.com/2015/07/29/raspberry-pi-gistemp-it-compiled/
Raspberry Pi GIStemp – It Compiled
Posted on 29 July 2015 by E.M.Smith
I’ve not done any testing yet, just a first cut “unpack and compile”.
The good news is that it did, in fact, compile. And STEP0 ran.
I had the same two “Makefile” issues from the port to Centos (Red Hat for the Enterprise). No surprise as it was an unpack of the same tar file archive with the same two issues in it. A bogus “:q” from a botched exit from the editor in one source file, and the need to remove a trace directive from the compile statement on another. After that, it compiled fine with one warning.
That warning might be a significant bug in the GIStemp code. I’ll need to investigate it further. I don’t remember if any of the other compilations found that type mis-match. The importance of it is still TBD and might be nothing.
Here’s the captures on what I did:
So re-doing that on a faster Pi with the newest code seemed a bit thin gruel…
What I really wanted was a GCM to play with. Get into the whole idea of what is it these things do. Maybe put in a “GOSUB Solar_Cycle” in place of CO2, tune a bit, and present an alternative computer fantasy to the AGW one.
The Model E code is the biggest GCM (Global Circulation Model) for which I’ve found easy-to-get public domain source code. But it isn’t promising for a stack of 3 x Pi boards…
http://www.giss.nasa.gov/tools/modelE/
GISS GCM ModelE
The current incarnation of the GISS series of coupled atmosphere-ocean models is available here. Called ModelE, it provides the ability to simulate many different configurations of Earth System Models — including interactive atmospheric chemistry, aerosols, carbon cycle and other tracers, as well as the standard atmosphere, ocean, sea ice and land surface components.
[…]
Download
Model versions used for the CMIP5 and CMIP6 simulations are available via the nightly snapshots of the current code repository, including the frozen ‘AR5_branch’. However, users should be aware that these snapshots are presented ‘as is’ and are not necessarily suitable for publication-quality experiments.
Please let us know if you intend to use this code by subscribing to our mailing list. We will then keep you (very occasionally) informed about code patches and possible improvements to the configuration.
Guidelines for Use of Model Code
Our code is freely available for download and use. Note however, that this is a work in progress and many people contribute to its development. It’s important to state how much the group effort — in all the different aspects of modeling — has contributed to progress. From the people who conceive of new developments, the people who code them, the people that apply the model to new science questions, the people who find the inevitable bugs, to those who do the data processing and analyse the diagnostics, all are essential to keeping the project viable. This should be reflected on when deciding on co-authorship and acknowledgments.
When you look down into the desirable computer ‘configuration’, you find:
https://simplex.giss.nasa.gov/gcm/doc/UserGuide/System_requirements.html
System requirements
The ModelE source code is quite portable and can be installed and used basically on any Unix machine with enough CPU power and memory (from Linux clusters to Linux and Mac OS X desktops). Though one can run the basic serial version of the model with prescribed ocean on a single core with as little as 2 GB of memory, to do any useful simulations in reasonable time one would need a computer with at least 16 cores (Sandy Bridge or faster) with at least 1 GB of memory per core. To do dynamic ocean simulations with full atmospheric chemistry one typically would need 88 cores with at least 1 GB of memory per core.
The source code is written mostly in Fortran 90 language with some elements of Fortran 2003 and can be compiled either with Intel ifort compiler (version 12.0) or with GNU gfortran (version 4.9 or later).
For input/output we use a NetCDF library, so it has to be installed (version 3.6 or later).
For parallel simulations on multiple cores the model needs to be compiled with MPI support, so an MPI library needs to be installed on your computer. The following MPI distributions are currently supported by the model:
OpenMPI
Intel
MPICH2
MVAPICH2
SCALI
mpt
For desktops or small servers we would recommend OpenMPI, since it is the easiest one to install and configure, though MPICH2 also works without problems. On a cluster, typically it would be up to support group to make a decision on which MPI distribution is more suitable for a particular platform. Over the last few years we were using Intel MPI with great success.
The compilation process is based on GNU make command and uses perl scripting language and m4 preprocessor (GNU version). Typically these are present by default on any Linux or Mac OS X system, but if you are using other type on Unix you may need to install them.
If instead of latitude-longitude version of the model you want to work with cubed sphere version, then in addition to the requirements mentioned above you will need to install a compatible ESMF library. You will also need to obtain the source code for the cubed sphere dynamical core from the developers since it is not included in the standard modelE distribution.
As I’ve already gotten Fortran 90 and OpenMPI to run on the Pi, that’s not an issue. BUT, that 88 cores and 88 GB of memory is ‘an issue’ for a Pi Cluster. ASSUMING I’m willing to wait longer for results, and that a background process running for a week or three is OK by me (if not by them): that might let me use a Pi Model 3 with its 1 GB of memory (light on memory per core by 3 GB, so add swap…) and then it’s only 88/4 = 22 Pi Model 3 boards and the 6 Dogbone Cases to hold them… Er, a bit larger than my kit today… Even dropping back to the 16 cores and 16 GB is 4 cores more than I’ve got, and much much faster cores too… So I’m about a factor of 10 behind where I’d really need to be. Not where you want to make your first test run at a technology…
I could likely make their minimal run version “go”, and that’s where I’ll start whenever I return to Model E… but not now. Just below Model E on their page, you find a reference to an older model. Off to the side of that page at NASA, one finds a link to it:
http://www.giss.nasa.gov/tools/modelii/
GISS GCM Model II
The Goddard Institute for Space Studies General Circulation Model II, described fully by Hansen et al. (1983), is a three-dimensional global climate model that numerically solves the physical conservation equations for energy, mass, momentum and moisture as well as the equation of state.
Hmmm… 1983 isn’t all that long ago. Clearly this model has been used for Global Warming calculations. Looks like a good place to start, to me. Yes, it’s about 30 years back, but it ought to contain the basics. It would also be more approachable from the POV of figuring out how their thinking evolved.
Model Description
The standard version of this model has a horizontal resolution of 8° latitude by 10° longitude, nine layers in the atmosphere extending to 10 mb, and two ground hydrology layers. The model accounts for both the seasonal and diurnal solar cycles in its temperature calculations.
Cloud particles, aerosols, and radiatively important trace gases (carbon dioxide, methane, nitrous oxides, and chlorofluorocarbons) are explicitly incorporated into the radiation scheme. Large-scale and convective cloud cover are predicted, and precipitation is generated whenever supersaturated conditions occur.
Snow depth is based on a balance between snowfall, melting and sublimation. The albedo of snow is a function of both depth and age. Fresh snow has an albedo of 0.85 and ages within 50 days to a lower limit of 0.5. The sea ice parameterization is thermodynamic with no relation to wind stress or ocean currents. Below -1.6°C ice of 0.5 m thickness forms over a fractional area of the grid box and henceforth grows horizontally as needed to maintain energy balance. Surface fluxes change the ocean water and sea ice temperature in proportion to the area of a grid cell they cover. Conductive cooling occurs at the ocean/ice interface, thickening the ice if the water temperature remains at -1.6°. Sea ice melts when the ocean warms to 0°C and the SST in a grid box remains at 0°C until all ice has melted in that cell. The albedo of sea ice (snow-free) is independent of thickness and is assigned a value of 0.55 in the visible and 0.3 in the near infrared, for a spectrally weighted value of 0.45.
Vegetation in the model plays a role in determining several surface and ground hydrology characteristics. Probably the most important of these is the surface albedo, which is divided into visible and near infrared components and is seasonally adjusted based on vegetation types. Furthermore, the assigned vegetation type determines the depth to which snow reflectivity can be masked. Hydrological characteristics of the soil are also based upon the prescribed vegetation types; the water holding capacity of the model’s two ground layers is determined by the vegetation type as is the ability of those layers to transfer water back to the atmosphere via transpiration. Nine different vegetation classes, developed by Matthews (1984) for the GISS GCM, represent major vegetation categories and the ecological/hydrological parameters which are calculated from the vegetation. Since the GISS GCM is a fractional grid model, more than one vegetation type can be assigned to each grid box.
Sea surface temperatures (SST) are either specified from climatological input files or may be calculated using model-derived surface energy fluxes and specified ocean heat transports. The ocean heat transports vary both seasonally and regionally, but are otherwise fixed, and do not adjust to forcing changes. This mixed-layer ocean model was developed for use with the GISS GCM and is often referred to as the “Q-flux” parameterization. Full details of the Q-flux scheme are described in Russell et al. (1985), and in appendix A of Hansen et al. (1997). In brief, the convergence (divergence) at each grid cell is calculated based on the heat storage capacity of the surface ocean and the vertical energy fluxes at the air/sea interface. The annual maximum mixed-layer depth, which varies by region and season, has a global, area-weighted value of 127 meters. Vertical fluxes are derived from specified SST control runs where the specified SSTs are from climatological observations and have geographically and seasonally changing values. In the early 1990s Russell’s technique was modified slightly to use five harmonics, instead of two, in defining the seasonally-varying energy flux and upper ocean energy storage. This change improved the accuracy of the approximations in regions of seasonal sea ice formation. The technique reproduces modern ocean heat transports that are similar to those obtained by observational methods (Miller et al. 1983). By deriving vertical fluxes and upper ocean heat storage from a run with appropriate paleogeography and using SSTs based on paleotemperature proxies, the q-flux technique provides a more self-consistent method for obtaining ocean heat transports from paleoclimate scenarios that use altered ocean basin configurations.
Current Status
Present-day maintenance and some development of Model II is performed within the context of the Columbia University EdGCM project. See the links at right for source code downloads and other resources provided by that project. Historical versions of Model II (e.g., the computer code used in the 1988 simulation runs) are not currently available. Please address all inquiries about the EdGCM project and about implementing Model II on modern personal computers to Dr. Mark Chandler.
Persons interested in using the most recent version of the GISS climate model, a coupled atmosphere-ocean model, should see the ModelE homepage.
Gee… and it is STILL in use, though mostly in a teaching context.
I can live with that.
The “stuff on the right” says:
Downloads & Links
Model II Source Code
The 8°×10° (lat×lon) version of the GISS Model II is still in use as a research tool for paleoclimate and planetary studies, for very long simulations, or where limited computing resources are available. An up-to-date copy of this slightly modified version, with minor updates and bugfixes, is available from the EdGCM project.
EdGCM
The Columbia University EdGCM software is a graphical user interface which simplifies set-up and control of GISS Model II. This educational suite gives users the ability to create and conduct “Rediscovery Experiments”, simulations that reproduce many of the hundreds of experiments that have been conducted and published using this version of the NASA GISS GCM.
EdGCM Model II Forum
Persons attempting to compile the Model II FORTRAN source code may consult the EdGCM message boards for assistance from other model users.
Even has an active Forum for questions. This is not some antique dead code. This is an actively used teaching tool. Just what I’d like to have and where I’d be best served to start.
From their link:
EdGCM provides a research-grade Global Climate Model (GCM) with a user-friendly interface that can be run on a desktop computer. For the first time, students can explore the subject of climate change in the same way that actual research scientists do. In the process of using EdGCM, students will become knowledgeable about a topic that will surely affect their lives, and we will better prepare the next generation of scientists who will grapple with a myriad of complex climate issues.
Our goal is to improve the quality of teaching and learning of climate-change science through broader access to GCMs, and to provide appropriate technology and materials to help educators use these models effectively. With research-quality resources in place, linking classrooms to actual research projects is not only possible, but can also be beneficial to the education and research communities alike.
Just what I’m looking for. If it can run “on a desktop computer” it can run on a Pi (though perhaps more slowly…)
It even looks like, for the less adventuresome, you can just run it on your Mac or PC without the porting work:
http://edgcm.columbia.edu/software2/
Software
EdGCM, or the Educational Global Climate Model, is a suite of software that allows users to run a fully functional 3D global climate model (GCM) on laptops or desktop computers (Macs and Windows PCs). Teachers, students and others can learn-by-doing to design climate experiments, run computer simulations, post-process data, analyze output using scientific visualization tools, and report on their results. All of this is done in the same manner and with the same tools used by climate scientists.
Click here for details about the specific components of the EdGCM software
The Global Climate Model (GCM) at the core of EdGCM was developed at NASA’s Goddard Institute for Space Studies, and has been used by researchers to study climates of the past, present and future. EdGCM makes it possible for people to use the GCM without requiring programming skills or supercomputers. Major components of the software include:
A graphical user interface to manage all aspects of working with the GCM.
A searchable database that organizes experiments, input files, and output data.
Scientific visualization software for mapping, plotting and analyzing climate model output.
An eJournal utility to help create reports or instructional materials (including images).
Automated conversion of graphics and reports to html for web publishing.
EdGCM also comes complete with 6 ready-to-run climate model experiments:
2 modern climate simulations
3 global warming simulations
1 ice age simulation
and educators have great flexibility in constructing their own scenarios to satisfy specific curricular requirements. EdGCM scales for use at levels from middle school to graduate school, making it a unique tool for bringing a real research experience into the classroom.
Me? I want to compile this from scratch as my intent is to add some bits to it from the Solar and Planetary Cycles POV. Hey, we need a “computer model run” to bash the AGW folks with just like they have been pushing their stuff at us…
http://edgcm.columbia.edu/ModelII/Compile.html
Has a load of OS X Intel and OS X PPC and more. Also some PC bits. For Linux they want you to run it under wine:
OS X GNU Fortran
This platform is not officially supported. There is a Makefile.gfortran in the source directory which works at the time of this writing. You’ll need to edit RANVAX.f and setpath.f to remove some “_”‘s.
GNU Tools w/ CygWin (Unofficial compile)
See http://forums.edgcm.columbia.edu/ for comments and user suggestions on building with CygWin
Linux (Wine)
You cannot run EdGCM on Linux but you can run Model II. The following instructions are thanks to Patrick Lee
1. Create a run on Windows, suppose the name of the run is testrun
2. Run the simulation for the first hour and then stop the simulation.
3. Transfer the whole directory ..\EdGCM 4d\EdGCM 3.0\Output\testrun to Linux. You may delete the file GCM-testrun.exe which is the Lunar (the file testrun.exe is model II) and WhatToStart.TXT. If there is a file called SSW.STOPGCM, then you MUST delete it.
4. Use wine to run the testrun.exe on Linux (Notice that model is a command line program, so you should not be afraid when you see the black screen).
OK, so “some assembly required”, but it is known to be runnable under GNU Fortran and that’s my target language. As Fortran is highly portable, I’m thinking this likely is not all that hard to port, and these folks just love expensive Macs… “We’ll See” when the compile time comes…
So I downloaded it. And unpacked it.
[root@ARCH_pi_64 trunk]# ls -l /GCM
total 304
drwxr-xr-x 3 root root   4096 Dec 26 23:20 GCM
-rw-r--r-- 1 root root 303284 Dec 26 23:20 modelII_source.zip
OK, it’s a zip file of 303 KB. Not big, really. Unzipped with ‘unzip’ it makes a directory named “GCM”. Wandering down it, you get to:
[root@ARCH_pi_64 trunk]# pwd
/GCM/GCM/modelII/trunk
(Note I named my working dir /GCM before I knew what it would do, so I have a double dip on GCM at the top of the path…)
[root@ARCH_pi_64 trunk]# du -ks .
1576    .
[root@ARCH_pi_64 trunk]# ls
B83XXDBL.COM       Pjal0C9.f
BA94jalC9.COM      PostBuild.sh
DB11pdC9.f         R83ZAmacDBL.f
FFT36macDBL.f      RANVAX.f
FORCINGSjalC9.f    RANVAXxlf.f
FORCINGSmac.COM    README.f.in
Info.plist         RFRCmacDBL.f
Makefile           UTILmacDBL.f
Makefile.Mac.PPC   UTILwinDBL.f
Makefile.README    commandlinetest.gui
Makefile.gfortran  modelF.r
Makefile.ifort     mrweprefs.r
Makefile.win       pd_COMMON
Makefile.win.gui   setpath.f
Mjal2cpdC9.f
So 1.5 MB of stuff once unpacked (and not including data files). I think I can live with that.
I suspect my biggest hurdles will be the GUI bits. That there is a MAKEFILE for gfortran is a Very Good Thing.
[root@ARCH_pi_64 trunk]# cat Makefile.gfortran
F77COMPILER= gfortran
LINKER = gfortran
F77_FLAGS = -c -s -fconvert=big-endian -fno-automatic -ff2c -O2
# ifort -O2 -convert big_endian -IPF_fma -save -zero -ftz -assume
# dummy_aliases -align none -mp -openmp -c L23_DAILY_MClim_CH4mths.f
LIBS = -L/Developer/SDKs/MacOSX10.5.sdk/usr/lib
TARGET= model.command
SRCS = RANVAX.f \
       setpath.f \
       RFRCmacDBL.f \
       UTILmacDBL.f \
       Mjal2cpdC9.f \
       Pjal0C9.f \
       FORCINGSjalC9.f \
       FFT36macDBL.f \
       R83ZAmacDBL.f \
       DB11pdC9.f \
       README.f
OBJS = $(SRCS:.f=.o) # all objects
%.o: %.f
        $(F77COMPILER) -o $@ $(F77_FLAGS) $<
$(TARGET): $(OBJS)
        $(LINKER) $(LPATHS) $(OBJS) $(LNK_FLAGS) $(LIBS) -o $(TARGET)
clean:
        rm -f *.o
        rm -f $(TARGET)
.PHONEY: all clean
Looks pretty straightforward and simple. I think I can work with that.
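One flag worth a note for anyone following along: -fconvert=big-endian is presumably there because the model’s unformatted binary files are big-endian, while the Pi’s ARM chip runs little-endian. Should that flag ever go missing, the per-file way to say the same thing looks roughly like this (my own sketch with a placeholder file name, not a line from the model):

      PROGRAM RDCHK
C     Minimal sketch, mine, not from the model: read one record of
C     a big-endian unformatted file on a little-endian ARM box via
C     gfortran's CONVERT= extension, instead of the compile flag.
C     'some_input_file' is a placeholder name.
      REAL TAU
      OPEN (UNIT=9, FILE='some_input_file', FORM='UNFORMATTED',
     *      CONVERT='BIG_ENDIAN', STATUS='OLD')
      READ (9) TAU
      PRINT *, 'first word read:', TAU
      CLOSE (9)
      END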
As I already have gfortran installed under Arch Linux, I’m doing my first whack at it here. Once that works, I’ll try moving the whole thing over to Devuan on the Model 2 (assuming things are fast enough…)
So that’s where I’m at and what I’m doing for the next day or three…
The description of the Input Files here:
http://edgcm.columbia.edu/ModelII/InputFiles.html
Gives an interesting POV of what is set at the start, and how many parameters you can play with (lots!).
You can download the source zip file from:
http://edgcm.columbia.edu/ModelII/modelII_source.zip
Then there’s a lot of documentation links on their top page for the model… So some “light reading” for the rest of the year… that thankfully is only a few more days ;-)
So that’s what I’m ‘up to’ for the next while. Seeing if I can make a Pi Port of this go, then seeing how hard it might be to splice on a Solar Cycle routine… As in all things software “We’ll see” ;-)
Hmmm interesting project – just a thought
If you were to specify the specific Pi board you wanted to use, and the case (Amazon order number for example), and sort of mention a UPS mail drop address, Santa’s elves might from time to time send a few bits and pieces for an 88 core system your way via drop ship to that address.
Larry Ledwick says:
27 December 2016 at 1:18 am
“Santa’s elves from time to time might send your way”
I am now ready to assist with that or donate through paypal.
Be careful about the things you might wish for!
you might just get it…pg
‘Twere it me, I’d be very interested in seeing what happens when you “tweak” the ocean circulation timeframes as well as the solar – my own mental model is that the majority of climate “change” is driven by the “beat” between these two. Kinda like: slightly bigger solar = warming ocean surface, “sucked down” and a series of “stripes” in that circulation. As each “stripe” re-surfaces, it obviously affects the starting point for ocean surface temps. A complex “dance”, also potentially modulated by circulation variations due to lunar tidal cycle too – so maybe not always the same length-of-cycle. Call it (wild-ass guess) 700 years ocean cycle, 1100 year solar and 60 year lunar. That makes for a fairly complex waveform when viewed over 10K+ years.
Of course you have your own ideas, just throwing stuff out there – mayhap if Santa is nice re: Pi’s, I will get my Xmas wish too ;-)
Hi Chief. Greetings from the Big Mango (BKK).
Running the GCM on a desktop sounds like it will be a ton of fun. For the one week to 2 week run times you may want to consider battery float or uninterruptible power supply haha.
I hope you can add in the solar cycle and that other thing. It would be way cool if you could simulate the little Ice Age, eh.
I noticed the following statement regarding the ocean simulation theory; this may require some tweaking if it doesn’t give you the desired result.
The ocean heat transports vary both seasonally and regionally, but are otherwise fixed, and do not adjust to forcing changes. This mixed-layer ocean model was developed for use with the GISS GCM and is often referred to as the “Q-flux” parameterization. Full details of the Q-flux scheme are described in Russell et al. (1985), and in appendix A of Hansen et al. (1997). In brief, the
Regards, Pearce M. Schaudies.
Minister of Future
@Elves:
Well, first I need to make this little one go… then assess the difficulty of reaching the goal SW config… then choose ‘best hardware’… (Basically time out the speed of an ARM vs Intel arch chip … i.e. one run on the Pi one on the Laptop… and x cost / mip…)
I also need to look at the loop structure in the code. IFF it is amenable to GPU handling, then it is most likely a single $150 or so NVIDIA kit would do it.
So a while to go before I can assure any Elves that they are actually doing anything other than making my office look kool ;-)
But I’m banking that offer “for that day”… (from 3 Days of the Condor…)
@kneee63:
Yup, lots of things you can do with the basic running system “just to see”. One I’m hoping to try is shut off CO2, then compare the result to the solar cycle long cycles. Is there an “inverse match” indicating they need to be added in?…
@Pearce:
Power here is pretty stable (once we pitched out Gov. Grey “out” Davis and his playing with the electric grid…) so when I shut down the Pi Model 1 to put it in the dogbone case, it had been running a couple of months (basically since I’d last put Alpine on it and put it into production). Prior to that, the Debian version had near a year uptime on it IIRC.
BUT, if it comes time for really long runs of more than that, I’ve got a kilowatt UPS that just needs a battery, and a new battery just sitting in a car from Florida with nothing else to do, so I can make a kW UPS with about a 4 week run time, that float charges when there is power; and all in about 2 work hours…
@All:
Some sizing:
( wc -l gives number of lines in a file)
So only 4 big modules, really. 7 different Makefiles (only one of which is likely needed by me). Some COMMON blocks (the things with .COM endings) for passing data between FORTRAN programs. Then 8 small FORTRAN programs, one .sh shell script, and whatever a .gui file is. Oh, and the README file. Looks pretty easy. Though 6000 lines is about 100 pages… I hope a lot of it is comments and whitespace ;-) At 50 lines / page, that 24,000 lines is 480 pages, or about 1000 pages on a 24 line terminal screen window. Small for a Russian Novel ;-)
In the gfortran Makefile, this is the most bothersome item:
LIBS = -L/Developer/SDKs/MacOSX10.5.sdk/usr/lib
I suppose that’s something to do with the gui stuff… but needs checking. SDK is Software Development Kit typically, so maybe not.
In the .win version it has:
DFLT_LIBS= unix.lib vms.lib absRT0.lib kernel32.lib fio.lib fmath.lib comdlg32.lib f90math.lib libac.lib
The mac PPC (PowerPC) version has:
LIBS = -lU77 -lV77 -L”$(ABSOFT)/lib” -lf90math -lfio -lf77math -lm
so maybe it’s just the needed FORTRAN libraries, in which case I’m likely home free as Linux has all that pretty much built in.
A brute force trial compile (just to surface errors out the gate) using
mostly gave warnings, but also some errors. It looks related to the ‘advice’ on one of the pages that one of the platform choices would require removing some underscores… Once one file fails to compile, the others may fail due to it not existing yet…
This is a long list of output, but useful to document what happens with a straight compile and no attempt at making things match the system:
None of it looks particularly horrible to me, just takes time to go through, one at a time, fixing what it doesn’t like.
I think this bit is the key to a lot of those errors:
So that’s where I’ll start on the debugging…. That, and making my own Makefile.Pi to drive it…
The top level Makefile looks to drive the others, so just adding my own (based on the gfortran one) and a bit of ‘how to sense my computer type’ will likely do it.
So where it says do a Makefile.ifort or Makefile.Mac.PPC I’ll need to do something else…
Well, time to get back to looking at programs ;-)
Interesting… the two .r files…
So the Window Environment is likely going to be an issue if I want the GUI stuff. I’ll cross that when I must. I doubt the window interface is required by the base model.
Similarly:
One hopes there’s some easy way to get the same functions in X land…
The one .sh file is trivial. A “one line script”:
So, with that, I’m down to the dozen FORTRAN program files and the 4 COMMON block bits of FORTRAN in files.
OK, gui issues deferred, Makefile issues easily tractable, the “libs” issues likely none by default.
Time to dive into the FORTRAN files…
Starting with the smallest ones and working up… This is ‘setpath.f’ and is 36 lines of code…
As it is a workaround for a Windoz bug / issue, likely can be coded out of a Pi version (or left in if it is a null impact).
Next!
RANVAX.f is also short:
IBM 360? Really? Man that bit of code has been kicking around a long time…
Copy it to a new name and edit out the suspect part:
[root@ARCH_pi_64 trunk]# cp RANVAX.f ranvax.f
[root@ARCH_pi_64 trunk]# vi ranvax.f
I suspect the “issue” is that RANDU=ran_(IX) where the _ is likely wrong… taking out the underscore:
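The flavour of the change is just the one name (my paraphrase below, not the verbatim file):

      FUNCTION RANDU (IX)
C     Paraphrase only, not the EdGCM source: the shipped line was
C     RANDU = ran_(IX), written for a compiler that decorates
C     external names with a trailing underscore.  Dropping the
C     underscore lets gfortran bind its own ran() extension instead
C     of hunting for an external symbol called ran_.
      RANDU = ran(IX)
      RETURN
      END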
Oh Dear. Got past the ran_ error but now it doesn’t like something in the gnueabihf library function _start and still finds an undefined reference to ‘main’
I think I may need to show that the gfortran on this chip image works right before I go forward…
Off to compile / test a couple of vanilla FORTRAN programs known to work, then come back to this and figure out ‘wha?’… Likely to be a while… as it’s getting late here and sleep will claim me soon ;-)
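(For the record, the sort of vanilla check I mean is nothing fancier than this; it is my own throwaway, not model code:

      PROGRAM SANITY
C     Throwaway toolchain check, mine: if this won't compile, link
C     and print, the problem is the gfortran install, not the model.
      REAL X
      INTEGER I
      X = 0.0
      DO 10 I = 1, 10
         X = X + REAL(I)
   10 CONTINUE
      PRINT *, 'sum 1..10 =', X, ' (expect 55.0)'
      END

If that builds with a bare “gfortran sanity.f” and runs, the tool chain is fine and the problem is in my flags or the model code.)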
Well, my parallel FORTRAN tests compile and run, but this doesn’t. Using:
gfortran -static-libgfortran setpath.f
gives:
Which I think says it got through the compile phase and into the link / loader phase before barfing on missing standard libraries.
I think I may have a library issue. Easy test, try the same compile on a Debian chip where I’ve compiled all of GIStemp before (with LOTS of odd FORTRAN in it…)
No sense spending time trying to debug the port if I don’t know the tool chain works on complicated things. That random generator code is pretty simple, so I’m suspecting the Arch Linux FORTRAN isn’t quite right (or my calling sequence is way off somehow… those default libraries in the set up script).
In any case, it’s late and I’m taking a break on this until tomorrow…
Oh that “one last thing” moment… I was about to log out and noticed the two RANVAX* seemed awfully similarly named:
OK, with the “CALL RANDOM” it compiles without the strange C like error messages, but with a complaint about one of the arguments. Does that mean it would compile completely if that argument issue were resolved, or just that it would THEN reach that old error message state? Don’t know right now, but it is an interesting difference.
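If that argument complaint turns into a real fight, there is a standard fallback I’d consider (the sketch is mine, not the model’s): the Fortran 90 intrinsic takes no fussing at all, though it won’t reproduce the old generator’s sequence, so runs wouldn’t be bit-for-bit comparable.

      SUBROUTINE RNDSTD (R)
C     My sketch of a fallback, not EdGCM code: RANDOM_NUMBER is
C     standard Fortran 90, so gfortran accepts it without argument
C     complaints.  It does NOT reproduce the old VAX / xlf sequence.
      REAL R
      CALL RANDOM_NUMBER(R)
      RETURN
      END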
Hi Chief. when making first debug runs you might make the grid size 16 by 20 degrees instead of 8 by 10. Then it might run twice as fast haha.
8° × 10° = 80 square degrees per cell
16° × 20° = 320 square degrees per cell
That’s 1/4 the number of cells for the same area, so it ought to be about 4 x as fast…
But first I need it to run at all…
check your pub4all and see if the test works. your donation link is outdated…pg
@P.G.: OK.
@All:
Well, I got it to compile most of the stuff. I figured perhaps ‘main’ was missing as it was somewhere built by the Makefile. I copied the Makefile.gfortran to Makefile.pi and commented out the ‘LIBS’ line. It worked all the way down to a make of “README.o” where “README.f” was missing (as there isn’t one…) so I need to look at the top level Makefile (that I think makes it from README.f.in) and do that step first.
The Makefile.pi as it stands now:
This was done on a generic Debian Jessie, but I don’t think that matters. As this browser is being a pain (crashing) I’m likely to not use this system for the rest ;-)
Here’s a listing for the directory at the moment. You can see the .o files it made as output (and that .tarx is my format for moving things around in a compressed tarball…)
So some substantial progress. Looks like mostly I need to just clean up the Makefiles for it all to work on the Pi, then test the final result. “We’ll see”…
OK, the top Makefile has this in it:
You can see that last line makes the README.f file from README.f.in via stuffing in the subversion (SVN) release number. This Pi doesn’t have svn installed, so it errors on that call to the shell to set SVNVER. I’m not interested in installing svn, so I’m just going to hardcode that to 0.0 and “move on”. Having a makefile just to do that svn version setting is kinda silly anyway… The key bit is to make the README.f file that contains, basically, version header info to be stuffed into things later…
So, ok, patching around their ‘too clever by 1/2’ version setting and odd use of README… and “moving on”…
In “Makefile” I changed to this:
forcing the svn version number to 0.0 until such time as I decide what to do about it…
then the Makefile runs (just making README.f and advising what to do next):
For me, it will be “make -f Makefile.pi” and I’m not bothering with the ‘make clean’ as I’m just getting things to compile…
Looks like I’m up to that “deal with the ‘_’ problem” it advised about:
So it found all the .o object files (including the now extant README.o) and tried to link them all. At that point, the FUNCTION_ stuff started to stack up. OK, I can work with that. It had, earlier, said that would pop up. Essentially up to now has been fixing the Makefile settings and getting compiler flags and such set right. Now I can start the actual porting / debugging of the model code. That, then, sends me back to the advisory about gfortran not liking the underscores and where I need to change them… so that’s where I’m off to next.
Removing the underscores (as it advised for gfortran) from the function calls in setpath.f and RANVAX.f results in:
What looks like a successful compile and link.
Guess it’s on to first cut testing ;-)
Here’s the modifications to the two programs:
Well, it looks like I may get to make a bunch of input files by hand. I’ve got some more poking around to do to see if there is a default set somewhere, but for now this is what I’ve found:
http://edgcm.columbia.edu/ModelII/InputFiles.html
Many of them are only used with particular settings, so I might be able to make a minimal subset that lets me find out if the model runs at all.
This is likely to keep me busy for a while… “we’ll see”…
Nice to know that if you don’t get the input files just so, the model crashes… bolding mine.
from: http://edgcm.columbia.edu/ModelII/InputFileRules.html
Well, working down their list of documentation postings… this one is more interesting than I’d expected:
http://edgcm.columbia.edu/ModelII/NameList.html
It contains some cryptic hints about how to do the initial startup:
But more interesting is some of the things in the model that might be available to play with. Like planetary parameters:
Oh Really?… “Jim” (one presumes Hansen?) doesn’t “like” a solar correction factor…
Interesting….
Oh, here’s a nice bit (same file) where it lists the ‘forcings’, or what control knobs the code presumes drives the globe:
So it has a solar constant in it, and is set up to accept change over time ( I think) if you load that into a file. I’d want to divide things into UV vs visible vs IR, but hey, all things in their proper order… first get it to run at all…
Everything else is a diddle factor based on the Greenhouse Gas Thesis… so if the “solar constant” is kept constant, or only varies a tiny bit, then you MUST have the GHG effect to ‘tune’ the model to historical knowns…. Thus the cycle of belief begins… OK, so once this thing is running, my first thing to do is beef up the solar stuff, then add UV variation by altitude… but not, I think, today….
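For flavour, the sort of hook I have in mind is no more than this (numbers and names are all mine, purely a sketch; a real version would read a TSI history file and eventually split the spectrum into UV / visible / IR bands):

      SUBROUTINE SOLCYC (YEAR, S0)
C     Sketch only, my names and numbers, not the model's: wobble a
C     nominal solar constant with a crude 11 year sine in place of
C     a fixed value.  Roughly 0.1% peak to trough, with a minimum
C     near the mid 1970s.
      REAL YEAR, S0
      REAL S0BASE, AMP, PI
      PARAMETER (S0BASE = 1361.0, AMP = 0.6, PI = 3.1415927)
      S0 = S0BASE - AMP * COS(2.0*PI*(YEAR - 1976.0)/11.0)
      RETURN
      END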
Gee… another interesting bit:
http://edgcm.columbia.edu/ModelII/About.html
So the ocean has a seasonal swing to it, and has a little regional variation, but things like, oh, the Thermohaline Circulation and the Gulf Stream shifting overall global ocean heat are not going to change in response to either the sun or the gas ‘forcings’… Interesting. That thing which, IMHO, is MOST likely to be critically important to a Little Ice Age in Europe, or the onset of a global Ice Age Glacial, or even sporadic warming when it goes the other way, doesn’t change…
OK, I think that’s enough progress for one morning. I suppose that, now that I’ve got it compiled and found some interesting bits, it’s time to “RTFM”… Read The (ah….) Friendly Manual…
They have a load of docs at:
http://edgcm.columbia.edu/support2/documentation/
And I suspect it will take me more than a day or two to wade through them all…
With that, I’m “checking out” from the model porting business for the next good while while I curl up with a tablet, a cup-a, and some model operations PDFs to read…
” FUNCTION RANDU (X)
IBM 360? Really? Man that bit of code has been kicking around a long time…”
I had to smile!
My first program, in 1968, used RANDU (on a 360) for simulating ‘genetic drift’ in small isolated breeding populations. (U for uniform RANDom numbers between 0 and 1)
Good luck with the simulations – it keeps the brain well exercised!
Have you read this claim? http://www.coolingnews.com/enso-ann
If true would it be worth incorporating when you run out of things to do? ;)
Hi Chief. This PDF link says how GCMs are setup to magnify co2 effect. From ocean cooling post by win most at WUWT. … if you need to take an off topic break, heh.
Click to access gray2010_heartland.pdf
@Sandy:
I think it was still around in 1973? when I learned FORTRAN IV …
I’ll read the link in a moment…
@Pearce:
Yeah, a break would be nice.. just finished a cursory scan of the source code…
@All:
I’ve finished the docs and a rough scan of the source code. THE thing that stands out most is just how strongly the model is tuned to make CO2 causal.
The docs mostly cover the EdGCM parts, that are the GUI for controlling input and doing graphical displays (and NOT included in the model source code…). There is a Hansen paper there (from 1987?) that does explain the operation. That paper also describes various sensitivities found in different runs. Essentially, lots of things get tailored to show just the right results HOLDING SUN CONSTANT and ASSUMING CO2 has a given effect. I’ll be doing a specific posting on that paper.
THE biggest headache at the moment is those huge input parameter files. They are NOT included with the model source, but buried in the EdGCM part (separate download, at a price to keep it, and with licence entanglements). So I need to accept those handcuffs, or recreate the input files (to which the model is very sensitive and which need careful tuning…)
So time to step aside for a while and let it simmer….
All this computer gibberish makes my head spin but I did pick up on a couple of points.
The ModelE is based on a series of Hansen papers. Then there is the mention of needing 88 cores to run the code in a reasonable time. It all sounds really impressive until you realise that ModelE can’t do a decent backcast. If it can’t explain the past why waste a minute on this piece of garbage?
@G.C:
Because it is the hammer with which we are beaten, regularly. So I’m setting out on what is likely a long journey to make my own battle hammer… based on a corrected one of theirs (so it can’t be totally tossed as amateur garbage code… but the differences from theirs must be attacked, therefore addressed…)
Worst case is, it shows how the results are largely assumption and parameter driven…
@Pearce & Sandy:
Both links are interesting. Both suffer from the “I have a hammer so everything is a nail” problem. They do a nice job of presenting alternative drivers, but leave out any measured numerical sizes… If you can’t show the size matches the history, it can mean everything or nothing…
The Ocean MOC paper (2nd link) looks best to me, and does some sizing; but ignores what drives the MOC to change… so needs one more level of depth…
The ANN posting suffers from being ANN based. Neural Nets have the problem that they are trained and learn, but you don’t know WHAT they learned. There’s the old story of the N.Net trained to spot Russian vs. American tanks in photos. Eventually it was 100% accurate, even on partial tanks. So the researchers started cutting down the photos. How little tank was enough? Well, it got 100% even with parts of the photo with NO tank in it. Just leaves or even sky…
It had learned that the USA tanks were shot with 100 ASA while spies used 6400 to 3200 ASA film. It learned to measure the grain in the film used… so not useful as a battlefield tank spotter…
So I’d need to see more before believing his ANN correctly ascribed causality…
That both point out alternative mechanisms, presently being ignored, says a lot.
It is possible that elements in the logic of the code are correct but stitched together in a manner that results in the wanted answer rather than the correct one. Such as Mann inverting part of the tree records to yield his wanted temperature proxy.
We may well have had well meaning contributors, poorly led coders being directed to present preconceived answers to justify enlarging grant funding as well as “Scientific Stature”.
The question becomes, Do you throw out everything and start over or look for salvage?
These guys spent hundreds of millions of dollars and years of effort to create this “Master Piece”. Can it be patched or modified without too massive an effort? There is now 25 years more data and we know that earlier records are being modified to make the model code yield the required answers…pg
E.M.
If you’ve got what I suspect you’ve got, see how they handle plant evapotranspiration.
I was at a seminar at CSU in 1988 and the plant physiologist giving it had just seen the workings of the “latest and greatest”.
And he was horrified. Plant ET was handled with a model of ONE plant stomate extrapolated to the world! Maybe just a bit of fudging room here?
Early 2000’s I told that story in company of people here around the global gcm scene and said that I hoped better computers and more information had improved things. To be told that if anything they were worse.
Colour me a rejecter until there is better bloody evidence of a link between CO2 and CAGW
I took a look at this (dang, has it really been that long?) in the ’07 to ’09 time frame. I am not sure that I still have an archive of EdGCM from back then. I may pick this computer game up again.
EM – this dependence on the input data being within very tight bounds in order that the program doesn’t spiral out of control seems indicative of a lot of positive feedback inside the program itself. On the other hand, of course, we see that the real world must have negative feedback even though there are two stable temperatures available, with somewhat of a barrier between them, so that it will tend to stick in one range until it flips to the other.
If the model itself was correct, then feeding it input data that is outside the bounds should iterate to conditions in one of the two stable temperature bands. If the information on the performance of the program with a “bad” dataset is correct, then the program instead has a single narrow band of relative stability, and if for some reason it goes outside that then it won’t get back again. Such a program will predict disastrous consequences (either spiralling temperatures to a Venus-like set of conditions, or an ice-ball) once you step outside the current set of conditions.
The dataset itself sounds to be huge and would itself take a lot of time to input. With only an 8° by 10° grid, though, that gives only around 810 cells for which the data needs to be put in. It also seems that the dataset from one cell to the next may not change a large amount, with a lot of copy/paste being possible in setting initial conditions. I haven’t seen a definition of the vertical structure of each cell, which would probably multiply that initial number of cells by some number, but the air-only cells would likely also have a lot of parameters the same to start with.
At the moment, I’m seeing the insistence on the input data being within narrow bounds as being a tacit admission that the model is wrong. Though GIGO will obviously apply, if fed a dataset that is wrong it ought to iterate to one of the known stable temperatures. It’s going to take a lot of work to find out what assumptions are wrong, and maybe even to tease out the underlying assumptions and equations.
Trouble is, that’s only one of the 14 or so models in use at the moment.
The neural net and tanks story is informative. People are experimenting at the moment with computer-written code that evolves until it does the job required. Cheaper even than those H1B visa people, but also has the problem that you don’t know what it’s actually doing but only that it gives you the right answer as far as you’ve tested. Miss something out of the testing and some bad data going in could give surprising results. With pressure from the accountants to make the testing phase cheaper and quicker, it’s easy to predict some “interesting” results.
I’ll look forward to seeing what results you get, but I think that you’ll probably end by proving that the code was ill-specified in the first place. There may however be a good basis for the structure in there – they can’t have got it totally wrong.
Hi Chief. With so many unanswered questions popping up on the EdGCM and the huge data set, why not fall back to the GCM II, the earlier version? Maybe it’s more general and usable.
I also saw some JavaScript models on internet. They may be too simple however.
Regards, Pearce M. Schaudies.
Minister of Future
Hi Chief. If you haven’t checked the wiki, it tells more than you might need. Also, there may be a pony hiding in there, heh.
https://en.m.wikipedia.org/wiki/General_circulation_model
Hi Chief. Here’s another link. For some models you can download source and compile.
https://www.gfdl.noaa.gov/model-development/
Pearce,
I am just using the GCM Model II. But the only way to get the sources is to extract them from eduGCM as NASA has removed them from public access. Essentially it looks like they cut a deal to let edGCM make money off it and publish it wrapped in a nice GUI to isolate you from the FORTRAN.
As it is older, smaller, and simpler, I’m starting with the Model II code as a stepping stone to Model E, that I’ll be looking at next. I suspect it is an incremental advance and much of the core will be based on Model II. If need be, I can likely make a lighter and faster ModelE based system by cutting it back to the precision of Model II… I found a reference stating the computes go up as the 4th power of resolution… (3 physical dimensions plus time step?) So cut resolution in half, and your 88 cores would become: 2x2x2x2 = 16, and 88/16 = 5.5, call it 6 cores. That is high end desktop scale today… The Model II docs say low res works well enough… so…
Oh, and I probably ought to note that for anyone just wanting to download and run the edGCM version, the link is here: http://edgcm.columbia.edu/download-edgcm/
It demands an email address (then sends a link to the actual download) and then also starts your “30 day free trial”… and with some (to me at least so far) unknown cost to keep using it.
Nice that they are making money off of the NASA public domain software (and their added private GUI wrapper…) I guess… but having the config files available for download along with the GCM itself would be a “good thing” for those of us playing in Unix / Linux land… Oh Well…
(A case can be made that programming consists of data structures and algorithms. Having only the algorithms available via the software and demanding money to get to the data structures that make it run is at best a violation of the ethics of making the public portion of GCM Model II available… Their GUI wrapper and process is theirs, as is their database design, and charging for that is their privilege).
Pingback: Inside GCM Model II | Musings from the Chiefio
I wonder if a FOI request from Heritage Foundation or some similar group could shake loose those data structures for public access?
If you have a two part process and only one of them is openly available, you have not made the process a “public record” even though it was developed with public funds at taxpayer expense.
Larry:
One could always take the 30 day ‘trial’ and extract the data structures from their database, then make the case they are public data portions. I’m not seeing the need, though.
Today I did a quick look at ModelE. It is available with all the “rundecks” and input files and several of them are the same basic data (just more resolution in the data files). So worst case I just ‘derez’ some of them (i.e. average / interpolate the larger grid cell data of Model II)
But even beyond that: Looking at the computes needed, and their idea of a ‘reasonable run time’ and then mixing with that 1/2 the resolution gives 1/16 the CPU demand, I’m thinking that a ModelE port might be ‘reasonable’. (It already is touted as running on most any Linux / Unix…)
Then season with the point that only the full ocean dynamics with atmospheric chemistry needs all 88 cores, and Model II doesn’t do them anyway… so a “Model II like run with Model E” is likely in the ‘doable’ range.
So after a look at the Model II code to familiarize with it, I’m going to do a cursory scan of the Model E code just to see if it is at all similar (I expect the core is remarkably the same, given the general GISS approach of reused bits and an archeological dig aspect in the comments… IBM 360? RS6000?… )
IFF all that ‘checks out’, I’m going to try a run of the reduced features Model E just to size things, then cut it back as needed. (Simply reducing grid scaling ought to be pretty easy… add data points together by 2s for the input files, and cut array bounds in half, ought to cover a lot of it.) Heck, I might not even need to do that. It supports distributed (MPI) processing, and with the Model II speeds saying it does days worth of runs in minutes to hours, using 12 cores of Pi on a reduced features Model E might just give a usable model run in a day or three; and that’s FINE with me.
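The grid-halving bit really is about that mechanical. A sketch (mine, ignoring land/ocean masks and the pole rows for now) of the “add data points together by 2s” step:

      SUBROUTINE DEREZ2 (FINE, NX, NY, COARSE)
C     Sketch, mine, not model code: average each 2x2 block of a
C     fine grid into one coarse cell.  Real fields would need area
C     weighting toward the poles, and masks for land vs ocean.
      INTEGER NX, NY, I, J
      REAL FINE(NX,NY), COARSE(NX/2,NY/2)
      DO 20 J = 1, NY/2
         DO 10 I = 1, NX/2
            COARSE(I,J) = 0.25 * ( FINE(2*I-1,2*J-1)
     *                           + FINE(2*I  ,2*J-1)
     *                           + FINE(2*I-1,2*J  )
     *                           + FINE(2*I  ,2*J  ) )
   10    CONTINUE
   20 CONTINUE
      RETURN
      END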
In all cases, it’s a while more whacking on code before “design decisions” ought to be made. I’m also comfortable that ‘enough’ of the input parameter file data is readily available (even if the form is a bit off) that I can make even Model II “go” … though at the expense of a week or two of data herding. I’ve done worse…
In the end, it reduces to:
Collect and format a load of input data files from reasonably available sources (even if it takes sucking it out of images on a Hansen paper…)
v.s.
Trim back the ambitions of Model E and maybe do a systematic data array compression.
Neither one looks particularly hard (though both look a bit of a bother…)
If, in the end, I get a Public Domain Free Licensed Climate Model with some of the silly squashed out of it, well, that’s a pretty big feature…
Of all the “issues”, the one that bothers me the most is just that my tests of Parallel FORTRAN on the Pi have had it run SLOWER than as straight code. One hopes the MPI method is not so afflicted. (I need to create and run some kind of benchmark on that…)
I’ve been running with OpenMP, not MPI, so “your mileage may vary”… OpenMP looks something like this (a representative snippet, not lifted from the model code):
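      PROGRAM OMPDEMO
C     Representative snippet only, mine, not model code: to a plain
C     compile the !$OMP line is just a comment; built with
C     gfortran -fopenmp it becomes a directive that splits the loop
C     iterations across the cores.
      INTEGER N, I
      PARAMETER (N = 1000000)
      REAL A(N)
!$OMP PARALLEL DO
      DO 10 I = 1, N
         A(I) = SQRT(REAL(I))
   10 CONTINUE
!$OMP END PARALLEL DO
      PRINT *, 'last element =', A(N)
      END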
which gives a compiler directive to split the “do loop” across threads. MPI spins out larger chunks of program for remote running on other CPUs / computers, not just multiple threads in a multicore chip. So I have some hope MPI will work better on the Pi.
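By way of contrast, the MPI flavour (again just a sketch of mine, built with OpenMPI’s mpif90 wrapper and launched with mpirun) starts a whole copy of the program per core, or per board, and lets them talk over the network:

      PROGRAM MPIDEMO
C     Sketch only, mine: each MPI rank is a full copy of the
C     program, possibly on a different Pi, reporting in.
C     Build:  mpif90 mpidemo.f     Run:  mpirun -np 4 ./a.out
      INCLUDE 'mpif.h'
      INTEGER IERR, RANK, NPROC
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, RANK, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROC, IERR)
      PRINT *, 'rank', RANK, ' of', NPROC, ' checking in'
      CALL MPI_FINALIZE(IERR)
      END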
But that’s way down the road and deep in the weeds. For right now, I’ve got some 1988 vintage FORTRAN to wade through…. (Anyone got some hotsauce? This code is mighty chewy and flavorless and could use help in the swallowing ;-)
It would be nice if you can find a way to go through the back door to get open source data files to make the model work and side step their data download profit center.
Given all the digging that took place during email-gate, someone might have harvested those files somewhere from an open FTP server that they did not close down. As I recall they “leaked” a bunch of information around that time by sloppy management of their files, and people found open access files that were available for the free download at the time.
Maybe some of the data packrats over at WUWT have archived what you need. If so it might also be better quality being compiled before all the recent adjustments.
@Larry:
Don’t stress over it. The Hansen 87? paper has some of it included, and at the grid scale used the resolution is crap anyway, so it’s easily made from the available data. More importantly, I’ve started looking at ModelE and like the way it uses MPI. It ought to parallelize much better and easier than Model II AND is designed to run on a cluster of different architecture machines. This means, for example, that mixed Pi and Intel boxes ought to work fine. That’s a BIG deal…
Though, per ModelE, it’s a bit bigger with a LOT more moving parts (even if many are small)… A brief “look ahead”:
OMG, 309 K lines of FORTRAN code?… well at least most of them can just be accepted as is and the input files molested instead ;-)
After all, it is only 310 different FORTRAN programs ;-)
I think I can make something functional that’s at the 1/2 way point between Model II and ModelE…
Heh, I found my archive. I created it in 2008. It has 2007 dated files in it.
Pingback: Carping Comments – Idiots and “poseur” | Musings from the Chiefio
@P.G.:
I’ve saved the GHCN V1 and V2 data… One of my “someday” projects is a MySQL database loaded with all the versions I have so I can compare and contrast the “raw data” from different versions.
Oh, and I’m going to check my email, honest! Just some 10 year old Speyburn hauled me aside and beat me senseless ;-) It’s not my fault, though, he called my Pi Model 3 “A Garbage Scow!”… I mean, I can take a few insults, but that was too much! ;-)
Per the model:
Near as I can tell, where they have physics, they correctly code it. Where it is not possible at that scale, they apply a ‘good approximation’ (so for example, sub-grid scale clouds get a ‘plug number’ based on what they measure happened in {some grid} in {some place} at {some time} and extrapolated instead of modeled).
Where they don’t know, they “make a good guess”.
Near as I can tell from the first cursory review (and subject to massive change once I get deeper in it) they are, in fact, making a good faith effort to get the physics right. BUT, some of the factors are “fudge” as they don’t know or can’t model at that scale. Then they make the fatal mistake of believing they got it right…
IMHO: “Models are best used to ‘inform our ignorance'” -E.M.Smith
So, for example, some kind of general vegetation plug values are used (there’s an input file with the type of vegetation in each grid cell as gross values) and they assume that they have the physics of evapotranspiration right for all mixes at all times of day in all climates. Yet we know cactus has a very different behavior from Oaks and Pines, and that since the model was built we’ve learned many kinds of trees control EvTr to try to maintain an 86 F canopy temp. NOT in the model…
As another example, they set a TSI value (and even vary it with orbital mechanics), yet completely MISSED that UV can vary by 10% (and IR) during long term solar cycles and that changes where the energy goes, making all those atmosphere energy layer calculations a bit daft… as they don’t include that.
It is that kind of stuff where I need to dig in and find the “ignorance” that needs informing.
Now, what they did do (per the readings not in this article) is find the model unstable and not conformant to reality in many cases. So they tuned the input (those missing input files for things like land cover type and all) until it reasonably well modeled reality in the past. The assumption being that this made the model right, when it just makes the model consistent with history given the wrong formulas. That is where they went fully off the rails, IMHO. That, too, is likely where they got too much moisture in the upper atmosphere.
Now my approach is different. I’m going to get it going, then set it so it gets the atmospheric moisture right. THEN look at the output and how it differs from reality to “inform my ignorance” about what is likely missing that would change that.
Similarly, add in a UV / IR layer distribution and separate handling of UV, Visible, IR energy distribution on insolation. Then run with a “solar cycle” driver file with the solar state history. At that point, we ought to see the Great Pacific Shift of the middle 70s as it got warm in a step change AND the present step change to cooler (though perhaps with lags…). IF not, I won’t throw out that code, I’ll use it to “inform my ignorance” about what else might be needed.
That’s the fundamental difference of approach. They TRUST the model to be right. I EXPECT the model to be wrong and tell me where to look next. (Until such time as it IS right and matches reality AND predicts the future accurately…)
@CDQuarles:
IF you happen to have any of the input files, I’d be very interested in getting a copy…
@Simon:
Yeah, that instability issue speaks to me too. Clearly we have two stable ends with hysteresis between them. Ice Age Glacial and Interglacial. This seems driven by the tilt of the earth (TSI above 60 N) and land mass distribution on geologic time scales. Shorter term, ocean current cycles drive much of it.
On a “someday” basis, having a better ocean model would be needed. For now, it’s likely good enough given stationary land masses, no significant ocean depth change, and modest changes to ocean heating. Unfortunately, adding UV to deep ocean vs not is likely going to be a ‘bit wrong’ if it isn’t allowed to shift ocean circulation…
But if you don’t have 2 stable states (iced and not-iced) you are wrong. Likely limited by ice formation physics at the cold end and water vapor heat transport to the tropopause at the warm end, and with the warm end good…
I’ll check it but I don’t think there are any input files. How should I get it to you?