Dear co-Scilabers,

I met a strange Scilab bug this weekend. But today I tried with Octave, Matlab 2016, and R, and I found the same strange behavior. So either I am missing something, or the bug affects all these languages in the same way. It is reported at http://bugzilla.scilab.org/15276

In a few words, here it is:

The mantissa (significand) of any double-precision number carries 53 bits of precision (numbered #0 to #52, the leading bit #52 being implicit in the encoding), plus 1 bit for the sign.
This relative accuracy sets the value of the machine epsilon:
--> %eps == 2^0 / 2^52
 ans  =
  T
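
As a quick sanity check of this definition (this is what I would expect from it, not a proof): adding %eps to 1 changes the value, while adding only half of %eps is absorbed:

--> 1 + %eps == 1
 ans  =
  F

--> 1 + %eps/2 == 1
 ans  =
  T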
From this, as expected, we get:

--> 2^52 + 1 == 2^52
 ans  =
  F

--> 2^53 - 1 == 2^53
 ans  =
  F

--> 2^53 + 1 == 2^53   // (A)
 ans  =
  T
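
Note that this is already the delicate regime: just above 2^53, the spacing between two consecutive doubles should be 2, so no odd integer can be stored exactly there. A quick check with nearfloat (if I use its syntax correctly) should confirm it:

--> nearfloat("succ", 2^53) - 2^53
 ans  =
   2.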

Now comes the issue:
In (A), the relative difference 1/2^53 is too small (< %eps) to be recorded, so it cannot change the number. OK. But since 1/(2^53 + 2) is even smaller than 1/2^53, it should not make any difference either. Yet it does:

--> (2^53 + 2^1) + 1 == (2^53 + 2^1)
 ans  =
  F
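
To see where the sum actually lands, we can subtract the base back (at this magnitude, the subtraction itself should be exact). I would expect:

--> ((2^53 + 2^1) + 1) - (2^53 + 2^1)
 ans  =
   2.

So the +1 is not simply dropped: the result seems to be pushed up to the next representable value, 2^53 + 4.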

How is this possible?!
But this occurs only when bit #0 is set. For higher bits (here below bit #1, set through the 2^2 term), we get back to the expected behavior:
--> (2^53 + 2^2) + 1 == (2^53 + 2^2)
 ans  =
  T
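
Here is my quick scan over the low bits (k = 0 collapses to (A), since 2^53 + 2^0 itself rounds back to 2^53). If I am right, only k = 1 misbehaves:

--> for k = 0:4, b = 2^53 + 2^k; mprintf("k = %d : %s\n", k, string(b + 1 == b)); end
k = 0 : T
k = 1 : F
k = 2 : T
k = 3 : T
k = 4 : T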

So, when bit #0 is set and we add a value on bit "#-1", the language somehow behaves as if there were a "carry" into bit #0, and seems to toggle it. Is this part of some IEEE 754 floating-point convention, am I missing something, or is it a true bug?
Again, R, Octave, and Matlab behave in exactly the same way...

Looking forward to reading your thoughts,

Samuel



