On Wed, Feb 28, 2024 at 7:38 AM Stavros Macrakis <macra...@gmail.com> wrote:

> On Wed, Feb 28, 2024 at 10:12 AM Raymond Toy <toy.raym...@gmail.com>
> wrote:
>
>> On Wed, Feb 28, 2024 at 5:44 AM Camm Maguire <c...@maguirefamily.org>
>> wrote:
>>
>>> ...
>>> > I hope you will consider changing the default to trap on invalid
>>> > operations so that instead of returning NaN, an error is signaled.
>>> ...
>>> >
>>>
>>> This is good to hear, as I was of the opposite impression that most
>>> users wanted NaNs to propagate freely.
>>>
>>
>> Maybe you should take a poll or something.  It could be that I'm just a
>> terrible numerical programmer.  But in my experience, it's fairly easy to
>> produce an infinity somewhere, which then gets subtracted from another
>> infinity, yielding a NaN that poisons all further computations.
>>
>
> I disagree vehemently. The behavior of NaN is extremely useful. For
> example, it is nice that [1,0,3]/[2.0,0.0,2.0] returns [0.5,NaN,1.5] rather
> than just ERROR.
>
> R found this kind of behavior so useful in statistical calculations (where
> "missing value" is a reasonable thing to calculate with) that they added it
> to *all* numeric types, not just floats.
>

I think we'll just have to agree to disagree.  In a previous life I did
physical-layer simulations for cellular systems.  These would often take
many hours (or days) to run.  If I messed up and accidentally computed 1/0
because I forgot to initialize something, then after hours or days all of
the printed results were basically NaN.  I could have saved hours or days
had an ERROR been signaled at the point where I divided by 0.  IIRC this
usually happened fairly early in the run, not at the end.
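
To make the failure mode concrete, here is a minimal Python sketch (not the
original simulation code): one invalid operation early on turns every later
result into NaN, with no error signaled anywhere along the way.

```python
import math

# One invalid operation early on -- no error is signaled, just a quiet NaN.
x = math.inf - math.inf        # inf - inf is an "invalid" op; result is nan

# All downstream computation inherits the poison.
total = 0.0
for step in range(1_000_000):
    total += x * step          # nan propagates through every update

print(total)                   # nan -- discovered only in the final printout
print(math.isnan(total))       # True
```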

I guess this experience has shaped my strong desire to have errors signaled
as soon as possible.  But I don't turn on underflow traps: underflows in my
simulations were generally harmless, or at least I don't remember them ever
causing bad results.
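
For what it's worth, the split I'm describing (trap invalid operations,
leave underflow alone) is easy to express in NumPy's error-state controls.
This is just a Python illustration; GCL would use its own trap-enable
mechanism:

```python
import numpy as np

# Trap invalid operations (inf - inf, 0/0, ...) but leave underflow quiet.
np.seterr(invalid="raise", under="ignore")

try:
    np.float64(np.inf) - np.float64(np.inf)   # invalid -> error right here
except FloatingPointError as err:
    print("trapped:", err)

# Underflow still just flushes toward zero with no fuss:
print(np.float64(1e-300) * np.float64(1e-300))   # 0.0
```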

>
>         -s
>


-- 
Ray
