Just digging a bit further in julia:
println("2.0^53+2.0^1 : $(bitstring(2.0^53+2.0^1))")
println("2.0^53+2.0^1+1 : $(bitstring(2.0^53+2.0^1+1))")
println("")
println("2.0^53+2.0^2 : $(bitstring(2.0^53+2.0^2))")
println("2.0^53+2.0^2+1 : $(bitstring(2.0^53+2.0^2+1))")
gives:
2.0^53+2.0^1 :
0100001101000000000000000000000000000000000000000000000000000001
2.0^53+2.0^1+1 :
0100001101000000000000000000000000000000000000000000000000000010
2.0^53+2.0^2 :
0100001101000000000000000000000000000000000000000000000000000010
2.0^53+2.0^2+1 :
0100001101000000000000000000000000000000000000000000000000000010
So the first two are different, while the last two are bit-identical.
If anyone can point me to the specific part of the IEEE spec that explains this, I'll be happy!
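For readers without Julia at hand, the same bit patterns can be reproduced in Python, whose floats are also IEEE 754 doubles; the helper `bits` below is just a stand-in for Julia's `bitstring` and is not part of any library:

```python
import struct

def bits(x: float) -> str:
    """Return the 64-bit IEEE 754 pattern of a double as a string of 0s and 1s."""
    return format(struct.unpack(">Q", struct.pack(">d", x))[0], "064b")

print(bits(2.0**53 + 2.0**1))        # ...0001
print(bits(2.0**53 + 2.0**1 + 1.0))  # ...0010 (adding 1.0 changed the number)
print(bits(2.0**53 + 2.0**2))        # ...0010
print(bits(2.0**53 + 2.0**2 + 1.0))  # ...0010 (adding 1.0 changed nothing)
```
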
Antoine
On 19/09/2017 at 08:58, antoine monmayrant wrote:
Oops, my bad: julia does exactly the same when using Float64, not Int64:
println((2.0^53 + 2.0^2) + 1.0 == (2.0^53 + 2.0^2))
true
println((2.0^53 + 2.0^1) + 1.0 == (2.0^53 + 2.0^1))
false
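The same two comparisons can be transcribed into Python (again, any IEEE 754 double implementation behaves identically):

```python
# Above 2^53 the spacing between consecutive doubles is 2.0, so adding 1.0
# either vanishes entirely or bumps the value by a whole step of 2.0.
print((2.0**53 + 2.0**2) + 1.0 == (2.0**53 + 2.0**2))  # True
print((2.0**53 + 2.0**1) + 1.0 == (2.0**53 + 2.0**1))  # False
```
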
It has to be some normal IEEE thing but I did not manage to find the
proper reference for it.
Antoine
On 18/09/2017 at 20:54, Samuel Gougeon wrote:
Dear co-Scilabers,
I ran into a strange Scilab bug this weekend. But today I tried with
Octave, Matlab 2016 and R, and I found the same strange behavior. So
either I am missing something, or the bug affects all these languages
in the same way. It is reported @ http://bugzilla.scilab.org/15276
In a few words, here it is:
The mantissa of any double-precision number carries 53 bits of
precision (numbered #0 to #52, bit #52 being implicit), plus 1 bit for
the sign.
This relative accuracy sets the value of the machine epsilon:
--> %eps == 2^0 / 2^52
ans =
T
From there, as expected, it follows that:
--> 2^52 + 1 == 2^52
ans =
F
--> 2^53 - 1 == 2^53
ans =
F
--> 2^53 + 1 == 2^53 // (A)
ans =
T
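These three results follow directly from the spacing (ulp) of consecutive doubles around 2^53. A quick Python sketch, using `math.ulp` (available since Python 3.9) to make the spacing explicit:

```python
import math

# Just below 2^53 consecutive doubles are 1 apart; at and above 2^53
# they are 2 apart, so odd integers above 2^53 are not representable.
assert math.ulp(2.0**52) == 1.0
assert math.ulp(2.0**53) == 2.0

print(2.0**52 + 1.0 == 2.0**52)  # False: 2^52 + 1 is representable
print(2.0**53 - 1.0 == 2.0**53)  # False: 2^53 - 1 is representable
print(2.0**53 + 1.0 == 2.0**53)  # True : 2^53 + 1 is not, and rounds to 2^53  (A)
```
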
Now comes the issue:
In (A), the relative difference 1/2^53 is too small (< %eps) to be
recorded and to change the number. OK.
Since 1 / (2^53 + 2) is even smaller than 1 / 2^53, it should not
make a difference either. Yet, it does:
--> (2^53 + 2^1) + 1 == (2^53 + 2^1)
ans =
F
How is this possible ??!
But this occurs only when the bit #0 is set. For higher bits
(here the #2 one), we get back to the expected behavior:
--> (2^53 + 2^2) + 1 == (2^53 + 2^2)
ans =
T
So, when bit #0 is set and we add a value on the bit "#-1", the
language somewhat behaves as if there were a "withholding" value on
bit #0, and seems to toggle it.
Is this part of some IEEE floating-point convention, or am I missing
something, or is it a true bug?
Again, R, Octave and Matlab behave in exactly the same way...
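[Editor's note] The behavior the thread is observing is indeed part of the IEEE convention: it is the default rounding mode, round to nearest, ties to even (the "roundTiesToEven" attribute in §4.3 of IEEE 754-2008). When the exact sum lands exactly halfway between two representable doubles, the one whose last significand bit is 0 (the "even" one) is chosen. A Python sketch of the two cases (Python floats are IEEE 754 doubles):

```python
big = 2.0**53  # above this point consecutive doubles are 2 apart

# (2^53 + 2) + 1 = 2^53 + 3, exactly halfway between 2^53+2 and 2^53+4.
# 2^53+2 has an odd last significand bit, 2^53+4 an even one: the tie
# rounds *up* to 2^53+4, so the addition is visible.
print((big + 2.0) + 1.0 == big + 2.0)  # False
print((big + 2.0) + 1.0 == big + 4.0)  # True

# (2^53 + 4) + 1 = 2^53 + 5, exactly halfway between 2^53+4 and 2^53+6.
# This time the lower neighbour 2^53+4 is the even one, so the tie rounds
# *down* and the addition is invisible.
print((big + 4.0) + 1.0 == big + 4.0)  # True
```

This is exactly the "withholding" effect described above: the tie-breaking rule looks at the last stored bit (bit #0), so whether that bit is set decides which way a halfway sum rounds.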
Looking forward to reading your thoughts,
Samuel
_______________________________________________
users mailing list
[email protected]
http://lists.scilab.org/mailman/listinfo/users