Henrique Dante de Almeida [EMAIL PROTECTED] wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.
Actually, the 80387 and the '87 FPU in all other IA-32 processors
do use IEEE 754 double-precision arithmetic when
Dave Parker wrote:
On May 21, 7:01 pm, Carl Banks [EMAIL PROTECTED] wrote:
The crucial thing is not to slow down the calculations with useless
bells and whistles.
Are you running your simulations on a system that does or does not
support the useless bell and whistle of correct rounding?
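For anyone wanting to reproduce the behaviour being argued about, here is a minimal sketch. It assumes a Python build whose double arithmetic is correctly rounded (SSE2 or ARM); on an x87 build with 80-bit intermediates, the last sum can come out as 1e16 instead, which is exactly the double rounding reported in this thread.

```python
# 1e16 - 2 is exactly representable as an IEEE 754 double
# (9999999999999998.0); adjacent doubles at this magnitude are 2.0
# apart, so the next double up is exactly 1e16.
a = 1e16 - 2.0
print(repr(a))

# The exact sum a + 0.9999 is 9999999999999998.9999, which is closer
# to a (distance 0.9999) than to 1e16 (distance 1.0001).  Correct
# round-to-nearest therefore keeps the result at a.
print(repr(a + 0.999))   # gives the expected result on any IEEE build
print(repr(a + 0.9999))  # correctly rounded: still a; x87 double rounding: 1e16
```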
On May 22, 1:14 am, bukzor [EMAIL PROTECTED] wrote:
On May 21, 3:28 pm, Dave Parker [EMAIL PROTECTED] wrote:
On May 21, 4:21 pm, Diez B. Roggisch [EMAIL PROTECTED] wrote:
Which is exactly what the python decimal module does.
Thank you (and Jerry Hill) for pointing that out. If I want
This person who started this thread posted the calculations showing
that Python was doing the wrong thing, and filed a bug report on it.
If someone pointed out a similar problem in Flaming Thunder, I would
agree that Flaming Thunder was doing the wrong thing.
I would fix the problem a
On May 22, 5:09 am, Ross Ridge [EMAIL PROTECTED]
wrote:
Henrique Dante de Almeida [EMAIL PROTECTED] wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.
Actually, the 80387 and the '87 FPU in all other IA-32
On May 22, 6:57 am, Diez B. Roggisch [EMAIL PROTECTED] wrote:
I wonder how you would accomplish that, given that there is no fix.
http://hal.archives-ouvertes.fr/hal-00128124
Diez
For anyone still following the discussion, I highly
recommend the above mentioned paper; I found it
extremely
On May 22, 6:09 am, Ross Ridge [EMAIL PROTECTED]
wrote:
Henrique Dante de Almeida [EMAIL PROTECTED] wrote:
Finally (and the answer is obvious). 387 breaks the standards and
doesn't use IEEE double precision when requested to do so.
Actually, the 80387 and the '87 FPU in all other IA-32
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:
[EMAIL PROTECTED]:~ python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16-2.
>>> a
Mark Dickinson schrieb:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:
[EMAIL PROTECTED]:~ python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more
On May 21, 11:38 am, Mark Dickinson [EMAIL PROTECTED] wrote:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:
[EMAIL PROTECTED]:~ python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help",
Mark Dickinson [EMAIL PROTECTED] wrote:
On SuSE 10.2/Xeon there seems to be a rounding bug for
floating-point addition:
[EMAIL PROTECTED]:~ python
Python 2.5 (r25:51908, May 25 2007, 16:14:04)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license"
On May 21, 3:22 pm, Marc Christiansen [EMAIL PROTECTED] wrote:
On my system, it works:
Python 2.5.2 (r252:60911, May 21 2008, 18:49:26)
[GCC 4.1.2 (Gentoo 4.1.2 p1.0.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> a = 1e16 - 2.; a
9998.0
>>> a +
On May 21, 12:38 pm, Mark Dickinson [EMAIL PROTECTED] wrote:
>>> a+0.999  # gives expected result
9998.0
>>> a+0.  # doesn't round correctly.
1.0
Shouldn't both of them give .0?
I wrote the same program under Flaming Thunder:
Set a to
On Wed, May 21, 2008 at 4:34 PM, Dave Parker
[EMAIL PROTECTED] wrote:
On May 21, 12:38 pm, Mark Dickinson [EMAIL PROTECTED] wrote:
>>> a+0.999  # gives expected result
9998.0
>>> a+0.  # doesn't round correctly.
1.0
Shouldn't both of them give
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user is guaranteed to always
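CPython does no such switching behind the user's back: a Python float is the platform C double from start to finish, as a quick check shows. (The sketch below uses sys.float_info, which appeared in Python 2.6; the interpreter in this thread is 2.5, where the same limits apply but aren't exposed this way.)

```python
import sys

# A Python float is an IEEE 754 binary64 ("double") on essentially
# all platforms; there is no automatic promotion to multi-precision.
print(sys.float_info.mant_dig)  # 53-bit significand
print(sys.float_info.max_exp)   # 1024

# Consequence: odd integers above 2**53 are not exactly representable,
# which is why 1e16 - 1 cannot be stored exactly.
print(2.0**53 + 1 == 2.0**53)   # True
```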
2008/5/21 Dave Parker [EMAIL PROTECTED]:
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
On Wed, May 21, 2008 at 3:56 PM, Dave Parker
[EMAIL PROTECTED] wrote:
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point
Dave Parker schrieb:
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to multi-precision floating
point so that the user
On Wed, May 21, 2008 at 4:56 PM, Dave Parker
[EMAIL PROTECTED] wrote:
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point
On May 21, 3:17 pm, Chris Mellon [EMAIL PROTECTED] wrote:
If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have already.
Actually, I've only posted on 2 threads that were questions about
On May 21, 3:19 pm, Dan Upton [EMAIL PROTECTED] wrote:
The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation.
I agree.
I also think that the precision/speed tradeoff should be under user
On Wed, May 21, 2008 at 4:29 PM, Dave Parker
[EMAIL PROTECTED] wrote:
On May 21, 3:17 pm, Chris Mellon [EMAIL PROTECTED] wrote:
If you're going to use every post and question about Python as an
opportunity to pimp your own pet language, you're going to irritate even
more people than you have
On May 21, 3:41 pm, Chris Mellon [EMAIL PROTECTED] wrote:
When told why you got different results (an answer you
probably already knew, if you know enough about IEEE to do the
auto-conversion you alluded to) ...
Of course I know a lot about IEEE, but you are assuming that I also
know a lot
Dave Parker schrieb:
On May 21, 3:19 pm, Dan Upton [EMAIL PROTECTED] wrote:
The fact is, sometimes it's better to get it fast and be good enough,
where you can use whatever methods you want to deal with rounding
error accumulation.
I agree.
I also think that the precision/speed tradeoff
On May 21, 4:21 pm, Diez B. Roggisch [EMAIL PROTECTED] wrote:
Which is exactly what the python decimal module does.
Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent program, I'll know to
use Python with the decimal module.
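As a sketch of that cross-check, the decimal module carries the computation from this thread exactly, since its default 28-digit context has room for every digit:

```python
from decimal import Decimal

# Build the operands from strings, so no binary rounding error
# sneaks in before the decimal arithmetic starts.
a = Decimal('1e16') - 2
print(a)                      # 9999999999999998

# The sum is exact in decimal arithmetic: the fractional part is
# simply appended, with no rounding at all.
print(a + Decimal('0.9999'))  # 9999999999999998.9999
```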
--
On May 21, 3:28 pm, Dave Parker [EMAIL PROTECTED] wrote:
On May 21, 4:21 pm, Diez B. Roggisch [EMAIL PROTECTED] wrote:
Which is exactly what the python decimal module does.
Thank you (and Jerry Hill) for pointing that out. If I want to check
Flaming Thunder's results against an independent
On May 21, 4:56 pm, Dave Parker [EMAIL PROTECTED] wrote:
On May 21, 2:44 pm, Jerry Hill [EMAIL PROTECTED] wrote:
My understanding is no, not if you're using IEEE floating point.
Yes, that would explain it. I assumed that Python automatically
switched from hardware floating point to
On May 21, 7:01 pm, Carl Banks [EMAIL PROTECTED] wrote:
The crucial thing is not to slow down the calculations with useless
bells and whistles.
Are you running your simulations on a system that does or does not
support the useless bell and whistle of correct rounding? If not,
how do you
On May 21, 11:27 pm, Dave Parker [EMAIL PROTECTED]
wrote:
On May 21, 7:01 pm, Carl Banks [EMAIL PROTECTED] wrote:
The crucial thing is not to slow down the calculations with useless
bells and whistles.
Are you running your simulations on a system that does or does not
support the useless
On May 21, 3:38 pm, Mark Dickinson [EMAIL PROTECTED] wrote:
>>> a = 1e16-2.
>>> a
9998.0
>>> a+0.999  # gives expected result
9998.0
>>> a+0.  # doesn't round correctly.
1.0
Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 ==
On May 22, 1:26 am, Henrique Dante de Almeida [EMAIL PROTECTED]
wrote:
On May 21, 3:38 pm, Mark Dickinson [EMAIL PROTECTED] wrote:
>>> a = 1e16-2.
>>> a
9998.0
>>> a+0.999  # gives expected result
9998.0
>>> a+0.  # doesn't round correctly.
1.0
On May 22, 1:36 am, Henrique Dante de Almeida [EMAIL PROTECTED]
wrote:
On May 22, 1:26 am, Henrique Dante de Almeida [EMAIL PROTECTED]
wrote:
On May 21, 3:38 pm, Mark Dickinson [EMAIL PROTECTED] wrote:
>>> a = 1e16-2.
>>> a
9998.0
>>> a+0.999  # gives expected result
On May 22, 1:41 am, Henrique Dante de Almeida [EMAIL PROTECTED]
wrote:
Notice that 1e16-1 doesn't exist in IEEE double precision:
1e16-2 == 0x1.1c37937e07fffp+53
1e16 == 0x1.1c37937e08p+53
(that is, the hex representation ends with 7fff, then goes to
8000).
So, it's just
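The hex representations quoted above can be checked with float.hex(), which appeared in Python 2.6 (the interpreter in this thread is 2.5, so the method itself is an anachronism here):

```python
# The two doubles straddling 1e16 - 1 differ by one unit in the last
# place of the 53-bit significand, i.e. by 2.0 at this magnitude.
print((1e16 - 2).hex())  # 0x1.1c37937e07fffp+53
print((1e16).hex())      # 0x1.1c37937e08000p+53

# 1e16 - 1 falls exactly halfway between them; round-to-nearest,
# ties-to-even picks the neighbour with the even significand: 1e16.
print(1e16 - 1 == 1e16)  # True
```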