Tim Peters added the comment:

I'm guessing this is a "double rounding" problem due to gcc not restricting an 
Intel FPU to using 53 bits of precision:

> In binary, (2**53-1)/2**53 * 2049 is:
>
> 0.11111111111111111111111111111111111111111111111111111
> times
> 100000000001.0
>
> which is exactly:
>
> 100000000000.11111111111111111111111111111111111111111 011111111111

The internal Intel "extended precision" format has 64 bits in the mantissa.  
The last line there has 65 bits in all (53 to the left of the blank, and 12 to 
the right).  Rounding _that_ to fit in 64 bits throws away the rightmost "1" 
bit, which is "exactly half", and so nearest/even rounding bumps what's left up 
by 1, leaving the 64-bit:

100000000000.11111111111111111111111111111111111111111 10000000000

in the extended-precision register.  Rounding that _again_ to fit in 53 bits 
then yields the observed

100000000001.0

result.  No int i with 0 < i < 2049 produces the same kind of double-rounding 
surprise.
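The two rounding steps can be checked with exact rational arithmetic. This is my own sketch, not code from the report: `round_to` is a hypothetical helper implementing round-to-nearest, ties-to-even at a given significand width, used to compare correct single rounding (straight to 53 bits) against the x87 path (64 bits, then 53).

```python
# Sketch (illustration only): simulate both rounding paths for i = 2049
# with exact fractions, so no hardware rounding interferes.
from fractions import Fraction

def round_to(x, precision):
    """Round a positive Fraction to `precision` significand bits,
    round-to-nearest, ties-to-even.  (Hypothetical helper.)"""
    # Find e with 2**e <= x < 2**(e+1).
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if Fraction(2) ** e > x:
        e -= 1
    # One unit in the last place at this precision.
    ulp = Fraction(2) ** (e - precision + 1)
    q, r = divmod(x, ulp)          # q: integer significand, r: remainder
    frac = r / ulp
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and q % 2 == 1):
        q += 1                     # round up on > half, or on a tie to even
    return q * ulp

x = Fraction(2**53 - 1, 2**53) * 2049    # the exact product

once  = round_to(x, 53)                  # correctly rounded double
twice = round_to(round_to(x, 64), 53)    # via a 64-bit extended register

print(once == 2049)    # False: once is 2049 - 2**-41
print(twice == 2049)   # True: double rounding bumps it up to 2049.0
```

Rounding once discards a tail just under half a ulp, so the result stays below 2049; rounding to 64 bits first hits an exact tie, nearest/even rounds up, and the second rounding then lands exactly on 2049.0, matching the walkthrough above.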

And with that I have to bow out - people have spent many years arguing with gcc 
developers about their floating-point decisions, and they rarely budge.  Why 
does -m32 have something to do with it?  "Just because" and "tough luck" are 
typically the only responses you'll get ;-)

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24546>
_______________________________________