Hi,

> MOOSE puts all their test output into exodus files and uses exodiff.
> That has the advantage of being structured enough that it can be diffed
> with rtol and atol.
>
> OTOH, we have a challenge that's mostly distinct from a discretization
> package. We're not testing error in a discretization (which is
> unchanging as long as the discretization doesn't change), we're testing
> the intermediate, unconverged values, and comparing error using relative
> tolerance (versus absolute tolerance, which would be better).
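Just to make the rtol/atol point concrete: tolerance-based diffing of field values boils down to something like the following minimal Python sketch (not the actual exodiff logic; the function name and default tolerances are made up):

    import numpy as np

    def fields_match(out, ref, rtol=1e-6, atol=1e-12):
        """Accept a test if every output value agrees with the reference
        up to a combined relative/absolute tolerance."""
        out = np.asarray(out, dtype=float)
        ref = np.asarray(ref, dtype=float)
        # The atol term guards entries near zero, where a purely relative
        # comparison would reject harmless round-off noise.
        return bool(np.all(np.abs(out - ref) <= atol + rtol * np.abs(ref)))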
It's also a matter of needing yes/no testing. A hard cutoff like if (err > eps) fail_test(); is probably too harsh, so we should instead use some kind of continuous metric to judge the outcome of a test.

Speaking of an HTML table: something that has been spinning in my head for a long time is that one can easily and automatically draw diagrams showing the convergence history of the residual norm obtained in a test run. Coloring the frame of the plot proportionally to the relative deviation from a reference convergence history gives you a quick idea of how far a test is off the reference (see the sketch further below). It won't work for all tests, but it gives you an idea of the sanity of the implementation.

> As we attempt to make our interfaces better for graphical front-ends and
> automatic high-level controllers, I think we should try to use monitors
> that provide structured output. This could be a JSON file with object
> identification and convergence history or perhaps a sqlite database. I
> suspect we could deal with most of our FP-sensitive testing with only a
> handful of structured monitors. Providing this structured output is
> providing an API so we should try to rapidly converge on an extensible
> data model that can be relatively stable.

I'm afraid I don't know enough details about how testing is done currently to contribute something useful to the discussion of such structured output.
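To make the diagram idea from above concrete, here is a quick Python sketch of what I have in mind. Everything in it is hypothetical (the function names, comparing the histories in log space, the green-to-red frame coloring); it is meant as an illustration, not a finished tool:

    import numpy as np
    import matplotlib.pyplot as plt

    def history_deviation(history, reference):
        """Relative deviation of a residual-norm history from a reference,
        compared in log space because the norms decay over many orders."""
        n = min(len(history), len(reference))
        h = np.log10(np.asarray(history[:n], dtype=float))
        r = np.log10(np.asarray(reference[:n], dtype=float))
        return np.linalg.norm(h - r) / np.linalg.norm(r)

    def plot_history(history, reference, max_dev=0.5):
        """Plot both histories and color the plot frame from green
        (matches the reference) to red (far off), saturating at max_dev."""
        dev = min(history_deviation(history, reference) / max_dev, 1.0)
        frame_color = (dev, 1.0 - dev, 0.0)  # green -> red
        fig, ax = plt.subplots()
        ax.semilogy(reference, "k--", label="reference")
        ax.semilogy(history, "b-", label="test run")
        for spine in ax.spines.values():
            spine.set_color(frame_color)
            spine.set_linewidth(4)
        ax.set_xlabel("iteration")
        ax.set_ylabel("residual norm")
        ax.legend()
        return fig

This also doubles as the continuous metric mentioned at the top: the deviation value itself can be recorded per test instead of a bare pass/fail.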
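That said, naively I would imagine one record per solver object, roughly like the following (purely a strawman; all field names are invented and nothing like this exists yet):

    import json

    # Strawman record for one solver object; every field name is invented.
    record = {
        "object": {"type": "KSP", "name": "outer_solve", "prefix": "ksp_"},
        "convergence": {
            "reason": "CONVERGED_RTOL",
            "iterations": 3,
            # includes the initial residual, hence iterations + 1 entries
            "residual_norms": [1.0e2, 3.4e-1, 5.6e-4, 7.8e-7],
        },
        "schema_version": 1,
    }
    print(json.dumps(record, indent=2))

A sqlite table with essentially these columns would carry the same information; the main point is that such a record is machine-comparable with rtol/atol just like the exodus approach above.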
Best regards,
Karli