More on Testing
MPAS-JEDI comes with an extensive suite of ctests. These tests are designed to ensure that as much of the source code as possible is regularly exercised, and that the generic applications used in larger-scale experiments are tested. Testing is an integral part of the development and experimentation process, and it is recommended that the tests be run every time the code is built. Developers working on a new feature in the MPAS-JEDI repository should ensure that all existing tests pass before submitting a pull request on GitHub. We also ask that a reasonable effort be made to ensure that new code is exercised by an existing test or a new test. Exceptions are made for code that is exercised extensively and continually in cycling experiments with verification, and for diagnostic tools that are not yet automatically tested.
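As a minimal sketch of that workflow, the suite can be driven with standard ctest commands from the CMake build directory (the directory path below is illustrative, and test names may differ between bundle versions):

```bash
# From the CMake build directory of mpas-bundle (path is illustrative):
cd $HOME/mpas-bundle/build

# Run the full test suite, printing the log of any test that fails:
ctest --output-on-failure

# Run only the tests whose names match a pattern:
ctest -R mpasjedi
```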
Continuous integration
MPAS-JEDI is instrumented with a continuous integration (CI) suite running on Amazon Web Services. Each time a pull request is issued against the develop branch in the JCSDA-internal repository, the MPAS-JEDI part of the MPAS-BUNDLE package is built with the Intel, GNU, and Clang compilers, and then all of the MPAS-JEDI ctests are executed. A failure from any of the ctests blocks the pull request from being merged. At this time, MPAS-JEDI is not instrumented with a code coverage report.
Adding a test to MPAS-JEDI
When new code that proves a concept or implements a new feature is added to the MPAS-JEDI or MPAS-Model repositories used in MPAS-BUNDLE, corresponding ctests should be added to the standard ctest set. This ensures that future modifications to either of those two repositories, or to the repositories on which MPAS-JEDI depends (i.e., OOPS, SABER, IODA, UFO, CRTM), do not break existing functionality that is critical to users' scientific experiments.
All of the ctesting in MPAS-JEDI is controlled through mpas-jedi/test/CMakeLists.txt. A ctest may be either a unit test, which exercises an individual method in a given class, or an application test, which executes a generic application. Benchmark results accompany the ctests: for a unit ctest, the reference log file contains results based on analytical solutions or accurate numerical studies, while each application ctest has an associated reference based on a previous execution of the same test. To determine whether a ctest passes or fails, its actual output is compared against the reference log file within a prescribed small tolerance.
To simplify adding tests to MPAS-JEDI, two macro functions are provided: add_mpasjedi_unit_test adds a new unit test, and add_mpasjedi_application_test adds a new application test. The reader is referred to mpas-jedi/test/CMakeLists.txt, where numerous examples of both exist. Note that the names of the yaml and reference files must match the name of the ctest, e.g., test_mpasjedi_forecast uses the configuration stored in mpas-jedi/test/testinput/forecast.yaml and is compared to the reference stored in mpas-jedi/test/testoutput/forecast.ref.
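Following that naming convention, a registered test can then be listed and run individually with standard ctest options; a brief sketch:

```bash
# List the registered MPAS-JEDI tests without running them:
ctest -N -R mpasjedi

# Run just the forecast application test, showing its log on failure:
ctest -R test_mpasjedi_forecast --output-on-failure
```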
If a PR made to one of the repositories used by MPAS-BUNDLE causes the reference values of many tests to change, it is useful to use the RECALIBRATE_CTEST_REFS option in mpas-jedi/test/CMakeLists.txt.
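The exact mechanics are defined in mpas-jedi/test/CMakeLists.txt; as a hedged sketch, assuming RECALIBRATE_CTEST_REFS is an ordinary CMake cache variable that causes the stored references to be refreshed from fresh test output, a recalibration pass from the build directory might look like:

```bash
# Assumption: RECALIBRATE_CTEST_REFS is a CMake cache variable whose
# exact behavior is defined in mpas-jedi/test/CMakeLists.txt.
cmake -DRECALIBRATE_CTEST_REFS=ON .

# Rerun the affected tests so the references are regenerated:
ctest -R mpasjedi

# Turn the option back off before committing the refreshed references:
cmake -DRECALIBRATE_CTEST_REFS=OFF .
```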
Additional automated testing on Derecho
There are two additional testing mechanisms in place on Derecho that provide automated test coverage.
(1) A daily cron job builds MPAS-JEDI from the JCSDA-internal/mpas-bundle::develop branch, then runs the standard ctest suite. To keep MPAS-JEDI up to date with the latest development of the JEDI infrastructure, the develop branches of the JEDI-core repositories, including OOPS, SABER, IODA, UFO, and CRTM, are checked out to build the code. This allows us to promptly identify any changes in upstream repositories that break MPAS-BUNDLE.
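For illustration only (the script name, path, and schedule below are hypothetical, not the actual Derecho configuration), such a job could be registered in a crontab as:

```bash
# Hypothetical crontab entry: build mpas-bundle develop and run the
# ctest suite every day at 02:00; the script path is illustrative.
0 2 * * * $HOME/jobs/mpas_bundle_nightly.sh >> $HOME/jobs/nightly.log 2>&1
```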
(2) A weekly cron job builds the JCSDA-internal/mpas-bundle::develop branch, then runs a 6-day, 120-km resolution cycling DA experiment. The experiment uses 3DEnVar to assimilate conventional observations (sondes, aircraft, gnssro refractivity, satwind, surface pressure) and AMSUA clear-sky radiances (aqua, noaa-15, noaa-18, noaa-19, and metop-a). The results are automatically analyzed and statistically compared to GFS analyses. This test ensures that MPAS-BUNDLE performance does not diverge far from a benchmark.