> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
...
> George rightly points out how tricky trig functions are.  My own
> favourite curious operation is subtraction:
> 
> Prelude> 1.0 - 0.8 - 0.2
> -1.49012e-08

Yes, floating point arithmetic is fascinating, isn't it! ;-)

The underlying reasons for this 'residue' (which shows up in
single precision, and as a smaller value in double precision)
are several; the GHCi session after the list illustrates the
first two:
        1. The radix used by the implementation is not 10, it's 2.
        2. 0.8 and 0.2 are not exactly representable in a finite,
           or indeed bounded, number of radix 2 digits.
        3. The operation of subtraction itself may be inexact.
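
For example, in a GHCi session (here with the literals forced
to Double, so the residue is the smaller, double-precision one;
the exact output may vary slightly between implementations):

Prelude> toRational (0.8 :: Double)
3602879701896397 % 4503599627370496
Prelude> toRational (0.2 :: Double)
3602879701896397 % 18014398509481984
Prelude> 1.0 - 0.8 - 0.2 :: Double
-5.551115123125783e-17

toRational shows the radix-2 fraction actually stored: neither
value is exactly 8/10 or 2/10, hence the residue.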

An implementation of IEEE 754, to the letter, by default sets
the 'inexact' indicator for 1.0 - 0.8 - 0.2.  (I know of no
way of 'reading' that indicator in Haskell, but C99 does
provide one, via <fenv.h>.)

Now, if the radix had been 10, as allowed by IEEE 854,
both 0.8 and 0.2 (and, as 'always', 1.0) would have been
exactly representable, and for this particular example the
subtractions would have been exact too.  Thus no setting of
the 'inexact' bit on account of that expression, and the
result would have been 0 (exactly).  (No, not -0, for
this expression.)
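
You can get a feel for this from Haskell even without a
decimal floating point type, by evaluating the same expression
at type Rational (not decimal floating point, of course, just
exact arithmetic -- decimal literals are converted to Rational
exactly):

Prelude> 0.8 :: Rational
4 % 5
Prelude> 1.0 - 0.8 - 0.2 :: Rational
0 % 1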

On the other hand, 1) for the same number of bits, radix 10
gives less overall accuracy than radix 2 (due to the larger
ULP gaps), and 2) radix-10 floating point arithmetic is
slower.  Hence it is not popular.


                Kind regards
                /kent k
