Hi Patrick,


On Mon, 25 Mar 2019 at 19:47, Patrick Eriksson <
patrick.eriks...@chalmers.se> wrote:

> Hi Richard,
>
> I can agree that this is not always critical for efficiency as long
> as the check is a simple comparison. But some checks are much more
> demanding. For example, the altitudes in z_field should be strictly
> increasing. If you have a large 3D atmosphere, it will be very costly to
> repeat this check for every single ppath calculation. And should this
> also be checked in other places where z_field is used? For example, if
> you use iyIndependentBeamApproximation you will repeat the check, as the
> DISORT and RT4 methods should also check this, since they can be called
> without a ppath being provided.
>

If a bad z_field can cause an assert today, then it has to be checked
every time it is accessed.

This problem seems simply to be a quick and somewhat bad original
design (hindsight is 20/20, and all that).  To start with, if it has to be
structured, then z_field is not a field.  It is as much a grid as pressure,
so the name needs to change.

And since we have so many grids that demand a certain structure, i.e.,
increasing or decreasing values along some axis but perhaps not all, then
why are these Tensors and Vectors, which are inherently unstructured?  They
could be instances of some Grid or StructuredGrid class.  You can easily
design a test in such a class that makes sure the structure is still good
after every access that can change a value (see the sketch below).  Some
special access functions, like logspace and linspace, and HSE regridding,
might have to be added so the check is not triggered at a bad time, but
not many.
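
To make this concrete, here is a minimal sketch (not ARTS code; the class
name, the plain std::vector storage and the member functions are all
assumptions) of a grid type that re-validates itself exactly where values
can change, so the strictly-increasing property never has to be re-checked
at every use:

#include <algorithm>
#include <stdexcept>
#include <vector>

// Hypothetical StructuredGrid: holds values that must be strictly
// increasing and re-checks that property after every mutating access.
class StructuredGrid {
public:
  explicit StructuredGrid(std::vector<double> values)
      : data_(std::move(values)) {
    check();  // validate once at construction
  }

  // Read-only access never needs a check.
  double operator[](std::size_t i) const { return data_[i]; }
  std::size_t size() const { return data_.size(); }

  // Single-element writes go through a function that re-validates.
  void set(std::size_t i, double value) {
    data_[i] = value;
    check();
  }

  // Bulk updates (e.g. something like an HSE regridding) replace all
  // values and trigger the check only once, at the end.
  void assign(std::vector<double> values) {
    data_ = std::move(values);
    check();
  }

private:
  void check() const {
    const bool ok =
        std::adjacent_find(data_.begin(), data_.end(),
                           [](double a, double b) { return b <= a; })
        == data_.end();
    if (!ok)
      throw std::runtime_error("Grid values must be strictly increasing.");
  }

  std::vector<double> data_;
};

The cost is that the check runs on every write, but a grid is written
rarely and read constantly, which is the whole point of moving the check
there.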

Since, I presume, iyIndependentBeamApproximation only takes "const Tensor3&
z_field" at this point, the current z_field cannot have its values changed
inside the function.  However, since it is possible that the z_field passed
to iyIndependentBeamApproximation is not the same z_field that the ppath
was generated from, the sizes of z_field and ppath both have to be checked
in iyIndependentBeamApproximation and the other iy-functions (see the
sketch below).
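
As a hedged illustration of such a guard (the names below are placeholders
for whatever the real Ppath and Tensor3 interfaces expose; this is not the
actual ARTS API), an iy-function could fail early with an informative
message instead of hitting an assert later:

#include <sstream>
#include <stdexcept>

// Hypothetical consistency guard: verify that the ppath only refers to
// vertical levels that exist in the z_field given to this function.
// "n_z_levels" and "max_level_index_in_ppath" are illustrative names.
void check_ppath_against_z_field(std::size_t n_z_levels,
                                 std::size_t max_level_index_in_ppath) {
  if (max_level_index_in_ppath >= n_z_levels) {
    std::ostringstream os;
    os << "The ppath refers to vertical level " << max_level_index_in_ppath
       << ", but z_field only has " << n_z_levels << " levels. "
       << "The ppath was probably generated from a different z_field.";
    throw std::runtime_error(os.str());
  }
}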

However, to repeat: if a bad z_field can cause an assert today, then it
has to be checked every time it is accessed.


>
> Further, I don't follow what strategy you propose. The discussion around
> planck indicated that you wanted the checks as far down as possible. But
> the last email seems to indicate that you also want checks higher up,
> e.g. before entering interpolation. I assume we don't want checks on
> every level. So we need to be clear about the level at which the checks
> shall be placed. If not, everybody will be lazy and hope that a check
> somewhere else catches the problem.
>

There were asserts in the physics_funcs.cc functions, and those asserts
were actually being triggered.  So I changed them to throws that can be
caught.

I am simply saying that every function needs to be sure it cannot trigger
any asserts.  Using some global magical Index is not enough to ensure that.

A Numeric that is not allowed to be outside a certain domain is a runtime
or domain error, not an assert.  You either throw such errors in
physics_funcs.cc, make every function that takes t_field or
rtp_temperature check that the values are valid, or create a special class
just for temperature that enforces a positive value.  The first is the
easiest (a sketch follows below).
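
As a sketch of the first option (the signature and the constants are
illustrative, not the exact physics_funcs.cc code), the check is a couple
of lines at the top of the function:

#include <cmath>
#include <sstream>
#include <stdexcept>

// Illustrative planck-like function that reports a non-physical
// temperature as a domain error instead of asserting deeper down.
double planck(double frequency, double temperature) {
  if (temperature <= 0) {
    std::ostringstream os;
    os << "The temperature must be > 0 K, but is " << temperature << " K.";
    throw std::domain_error(os.str());
  }
  constexpr double h = 6.62607015e-34;  // Planck constant [J s]
  constexpr double k = 1.380649e-23;    // Boltzmann constant [J/K]
  constexpr double c = 2.99792458e8;    // speed of light [m/s]
  return 2 * h * frequency * frequency * frequency / (c * c)
         / std::expm1(h * frequency / (k * temperature));
}

A bad rtp_temperature then produces a proper error message wherever it
came from, instead of an assert that only fires in debug builds.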



>
> In any case, it should be easier to provide informative error messages
> if problems are identified early on. That is, easier to pinpoint the
> reason to the problem.
>

I agree, but not by the magic that is *_checkedCalc, since it does not
guarantee a single thing once you are inside another function.

With hope,
//Richard