Hi Oliver,

Here I will only comment on point 1.

The aspect of running ARTS interactively was not pointed out before, and hence not considered in my answers. With that in mind it seems reasonable to replace most asserts with errors. (Most, as I agree that the index checking in matpack should be left alone.)

So I am in favor of your suggestion (without having had time to consider the details right now). I have just one wish: write down some instructions on how the new macros shall be used, and when assert should still be applied. We should document these things better in the future.

I suggest adding the instructions to the ARTS developer guide. There is already a small section on the use of asserts (Sec. 1.6).

(Switching to the new macros could be a task for the "ARTS cleaning week" in September!)

Bye,

Patrick

On 2019-03-26 11:46, Oliver Lemke wrote:
Hi Patrick and Richard,

I think we're mixing up two issues with error handling in ARTS which should be 
discussed separately:

1. There are instances where ARTS currently exits non-gracefully which is 
painful esp. for python API users.

2. How to better organize where errors are checked and which checks can be 
skipped in release mode.

I will concentrate on point 1 in this email because that's what I think 
triggered the whole discussion. For users who mainly use the ARTS API, asserts 
and uncaught exceptions are painful. Since their occurrence leads to process 
termination, in the case of the Python API they will kill the whole Python 
process. This is very annoying if you're working in an interactive shell such 
as ipython, because your whole session will die if this occurs. Therefore, our 
main focus should first of all be on fixing these issues.

Currently, when we catch exceptions at higher levels such as main agenda 
execution, the ybatch loop or the ARTS API interface, we only catch exceptions 
of type std::runtime_error. If an unforeseen std::exception is thrown from any 
library function, it leads to program termination. This issue is rather easy 
to solve by explicitly catching std::exception in addition to 
std::runtime_error in the appropriate places and handling them just as 
gracefully. I will take care of adding the code where necessary.
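The catch pattern described above could look like the following sketch. To be clear, `run_guarded` and the workload signature are illustrative only, not actual ARTS API; the point is just the ordering of the catch clauses:

```cpp
#include <exception>
#include <stdexcept>
#include <string>

// Illustrative sketch: catch the more specific std::runtime_error first
// (ARTS's own error reporting), then any remaining std::exception, so
// that no library exception escapes and kills the (Python) process.
std::string run_guarded(void (*workload)()) {
  try {
    workload();
    return "ok";
  } catch (const std::runtime_error& e) {
    return std::string("ARTS error: ") + e.what();
  } catch (const std::exception& e) {
    return std::string("unexpected error: ") + e.what();
  }
}
```

Since std::runtime_error derives from std::exception, the specific handler must come first or it would never be reached.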

For assertions, the fix is a bit more involved. Since they can't be caught, we 
would need to replace them with a new mechanism. As the benchmarks have shown, 
we won't lose (much) performance if we use try/catch blocks instead. There are 
a few exceptions, such as the index operators in matpack, which are better 
left alone.
After a discussion yesterday with Stefan, we came up with the following proposal: 
introduce a couple of convenience methods (macros) that can be used similarly to 
how the standard assert works now:

ARTS_ASSERT(condition, errmsg)

This would work the same as 'assert' except that it will throw a runtime_error 
if the condition is not fulfilled. Also, this statement will be skipped if ARTS 
is compiled in release mode. It could also be turned off in any configuration 
by the already existing cmake flag '-DNO_ASSERT=1'. If anyone feels that the 
current name of this option doesn't properly reflect its purpose, it can of 
course be renamed.

ARTS_THROW(condition, errmsg)

This will be the same as ARTS_ASSERT except that it will always be active.

Both macros will take care of adding the function name, filename and line 
number to the error message.
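A minimal sketch of how the two macros could be implemented. The names match the proposal, but the bodies here are my guess at the intent, not actual ARTS code:

```cpp
#include <sstream>
#include <stdexcept>

// Sketch only; the real implementation may differ. ARTS_THROW is always
// active. ARTS_ASSERT reduces to it unless NO_ASSERT is defined, in
// which case it compiles to nothing. Both prepend file, line and
// function name to the error message.
#define ARTS_THROW(condition, errmsg)                                   \
  do {                                                                  \
    if (!(condition)) {                                                 \
      std::ostringstream os_;                                           \
      os_ << __FILE__ << ":" << __LINE__ << " (" << __func__ << "): "   \
          << (errmsg);                                                  \
      throw std::runtime_error(os_.str());                              \
    }                                                                   \
  } while (0)

#ifdef NO_ASSERT
#define ARTS_ASSERT(condition, errmsg) ((void)0)
#else
#define ARTS_ASSERT(condition, errmsg) ARTS_THROW(condition, errmsg)
#endif

// Hypothetical use in a checked function (example only):
inline double checked_inverse(double x) {
  ARTS_ASSERT(x != 0, "division by zero");
  return 1.0 / x;
}
```

Centralizing the file/line/function bookkeeping in one place is what would make replacing the existing plain `assert` calls mostly mechanical.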

More complex checks will of course have to be implemented in a custom 
try/catch block. Blocks that should be deactivatable should be placed inside a 
preprocessor block such as DEBUG_ONLY. Again, if the name seems inappropriate, 
another macro that does the same thing under a better fitting name could be 
introduced. So we could have IGNORABLE as an alias for DEBUG_ONLY and 
-DIGNORE_ERRORS=1 as an alias for -DNO_ASSERT=1. I don't see why we need both, 
because in release mode we would want to activate both options anyway, right?

With respect to point 2, I don't yet see a clear strategy or a clear 
definition of which errors should be ignorable and which not. Since the past 
has clearly proven that it is already difficult to decide when to use an 
assertion and when a runtime_error, I don't fancy the idea of introducing a 
third class of errors that's defined as 'can be turned off if the user knows 
what he's doing'. Correct me if I'm wrong, Richard, but that's basically what 
you want to achieve with the proposed 'IGNORABLE' flag?

Cheers,
Oliver


On 25 Mar 2019, at 19:47, Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:

Hi Richard,

I can agree that this is not always critical for efficiency as long as the 
check is a simple comparison. But some checks are much more demanding. For 
example, the altitudes in z_field should be strictly increasing. If you have a 
large 3D atmosphere, it will be very costly to repeat this check for every 
single ppath calculation. And should this be checked also in other places where 
z_field is used? For example, if you use iyIndependentBeamApproximation you 
will repeat the check, as the DISORT and RT4 methods should also check this 
since they can be called without a ppath being provided.
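To make the cost concrete, the check in question is linear in the number of levels per profile, so repeating it per ppath calculation over a large 3D grid multiplies a non-trivial cost. A sketch (the function name is illustrative, not ARTS code):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// O(n) per profile: every pair of adjacent altitudes is compared.
// Repeated for every ppath calculation, this adds up in a 3D atmosphere.
inline void check_z_increasing(const std::vector<double>& z) {
  for (std::size_t i = 1; i < z.size(); ++i)
    if (z[i] <= z[i - 1])
      throw std::runtime_error("z_field altitudes must be strictly increasing");
}
```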

Further, I don't follow what strategy you propose. The discussion around planck 
indicated that you wanted the checks as far down as possible. But the last 
email seems to indicate that you also want checks higher up, e.g. before 
entering interpolation. I assume we don't want checks on every level, so we 
need to be clear about at what level the checks shall be placed. If not, 
everybody will be lazy and hope that a check somewhere else catches the problem.

In any case, it should be easier to provide informative error messages if 
problems are identified early on. That is, it is easier to pinpoint the reason 
for the problem.

Bye,

Patrick



On 2019-03-25 12:24, Richard Larsson wrote:
Hi Patrick,
Just some quick points.
On Sun 24 Mar 2019 at 10:29, Patrick Eriksson <patrick.eriks...@chalmers.se> wrote:
    Hi Richard,
    A great initiative. How errors are thrown can for sure be improved. We
    are both lacking such checks (there are still too many cases where an
    assert shows up instead of a proper error message), and the errors are
    probably implemented inconsistently.
    When it comes to use try/catch, I leave the discussion to others.
    But I must bring up another aspect here, on what level to apply asserts
    and errors. My view is that we have decided that such basic
    functions as
    planck should only contain asserts. For efficiency reasons.
Two things.
First, Oliver tested the speeds here.  The results are for random functions in 
physics_funcs.cc:
number_density (100 million calls, averaged over 5 runs):
with assert:    0.484s
with try/catch: 0.502s, 3.8% slower than assert
no checks:      0.437s, 9.8% faster than assert
dinvplanckdI (20 million calls, averaged over 5 runs):
with assert:    0.576s
with try/catch: 0.563s, 2.3% faster than assert
no checks:      0.561s, 2.7% faster than assert
but with no notable differences.  (We are not spending any of our time in these functions really, 
so +-10% is nothing.)  One nice thing asserts do is that they are completely gone when NDEBUG is 
set.  We might therefore want to wrap the deeper function calls in something that removes these 
errors from the compiler's view.  We have the DEBUG_ONLY environments for that, but a negative 
temperature is not a debug thing.  I suggested to Oliver that we introduce a flag that allows us to 
remove some or all parts of the error-checking code at the behest of the user.  I do not know what 
to name said flag so that the code stays readable.  "IGNORABLE()" in ARTS and 
"-DIGNORE_ERRORS=1" in cmake to set a flag so that everything inside the parentheses 
is not passed to the compiler?  This could be used to generate your 'faster' code, but errors would 
just be completely ignored; of course, users would have to be warned that any OS error or memory 
error could still follow...
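If I read the proposal right, IGNORABLE could be a thin preprocessor wrapper along these lines. This is a guess at the intent, not existing code, and the checked function is invented for illustration:

```cpp
#include <stdexcept>

// Guess at the proposed mechanism: -DIGNORE_ERRORS=1 in cmake would
// define IGNORE_ERRORS, and everything inside IGNORABLE(...) then
// disappears from the compiler's view.
#ifdef IGNORE_ERRORS
#define IGNORABLE(...)
#else
#define IGNORABLE(...) __VA_ARGS__
#endif

// Example: a check the user can compile away at their own risk.
inline double checked_temperature(double t) {
  IGNORABLE(if (t <= 0) throw std::runtime_error("non-positive temperature");)
  return t;
}
```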
The second point I have is that I really do not see the point of the asserts 
at all.  Had they allowed the compiler to make guesses, that would be somewhat 
nice.  But in practice, they just barely indicate what the issues are by 
comparing some numbers or indices before terminating the program.  They don't 
offer any solutions, and they should really never ever occur.  I would simply 
ban them from use in ARTS, switch to throws, and allow the user to build a 
properly non-debuggable version of ARTS where all errors are ignored, as above.
    For a pure forward model run, a negative frequency or temperature would
    come from f_grid and t_field, respectively. We decided to introduce
    special check methods, such as atmfields_checkedCalc, to e.g. catch
    negative temperatures in input.
I think it would be better if we simply removed the *_checkedCalc functions 
entirely (as a requirement for executing code; they are still good for sanity 
checkups).  I think they mess up the logic of many things.  Agendas that work 
use these outputs when they don't need them, and the methods have to manually 
check the input anyway because you cannot allow segfaults.  It is not the 
agendas that need these checks.  It is the methods calling these agendas.  And 
they only need the checks to ensure they have understood what they want to 
do.  And even if the checked value is positive when you reach a function, you 
cannot say in that method whether the check was for the data you have now, for 
data sometime long ago in the program flow, or just a bit someone set before 
calling the code.  So all the inputs have to be checked anyway.
For the atmfields_checkedCalc call in particular, it would make more sense to 
have this as part of the iy-methods than anything else.  Since ppath is 
generated externally now (compared to when the iy-methods last had their 
error handling updated), ppath also needs to be checked against the fields 
themselves anyway, or you can have segfaults in the interpolations.
    When doing OEM, negative temperatures can pop up after each iteration
    and this should be checked. But not by planck, this should happen on a
    higher level.
    A simple solution here is to include a call to atmfields_checkedCalc
    etc. in inversion_iterate_agenda. The drawback is that some data will be
    checked over and over again despite not being changed.
    So it could be discussed if checks should be added to the OEM part.
    Data changed in an iteration should be checked for unphysical values.
    That is, I think there are more things to discuss than you bring up in
    your email. So don't start anything big before we have reached a common
    view here.
Right, I won't.
With hope,
//Richard
_______________________________________________
arts_dev.mi mailing list
arts_dev.mi@lists.uni-hamburg.de
https://mailman.rrz.uni-hamburg.de/mailman/listinfo/arts_dev.mi

