#4381: scaleFloat does not handle overflow correctly.
---------------------------------+------------------------------------------
    Reporter:  draconx           |        Owner:  simonmar                   
        Type:  bug               |       Status:  patch                      
    Priority:  normal            |    Milestone:                             
   Component:  libraries/base    |      Version:  6.12.3                     
    Keywords:                    |     Testcase:                             
   Blockedby:                    |   Difficulty:                             
          Os:  Unknown/Multiple  |     Blocking:                             
Architecture:  Unknown/Multiple  |      Failure:  Incorrect result at runtime
---------------------------------+------------------------------------------

Comment(by daniel.is.fischer):

 The default implementation is
 {{{
 scaleFloat k x      =  encodeFloat m (n+k)
                        where (m,n) = decodeFloat x
 }}}
 The second component of decodeFloat lies between (TYP_MIN_EXP -
 2*TYP_MANT_DIG) and TYP_MAX_EXP; I don't know the exact implementation of
 decodeFloat, so the bounds may be inclusive or exclusive. For a nonzero
 mantissa, encodeFloat returns ±Infinity if the exponent is greater than
 (TYP_MAX_EXP - 1), and, for a mantissa resulting from decodeFloat, the
 result is 0 if the exponent is smaller than TYP_MIN_EXP - 2*TYP_MANT_DIG.
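
 To make the bounds concrete, here is a small illustration of mine (not
 part of the ticket) for Double on a typical base-2 IEEE platform, where
 TYP_MIN_EXP = -1021, TYP_MAX_EXP = 1024 and TYP_MANT_DIG = 53:
 {{{
 main :: IO ()
 main = do
   -- 1.0 decodes to (2^52, -52); a normalised Double mantissa has 53 bits.
   print (decodeFloat (1.0 :: Double))
   -- Exponent greater than TYP_MAX_EXP - 1 with nonzero mantissa: Infinity.
   print (encodeFloat 1 1024 :: Double)
   -- A decodeFloat-sized mantissa with an exponent below
   -- TYP_MIN_EXP - 2*TYP_MANT_DIG = -1127 rounds to 0.
   print (encodeFloat 4503599627370496 (-1128) :: Double)
 }}}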

 The exponent is apparently treated as a 32-bit int, so passing a large k
 to scaleFloat on a 64-bit system leads to overflow and incorrect results
 even when the (n+k) addition doesn't overflow on the Haskell side. In any
 case, if (n+k) over- or underflows on the Haskell side, the result is
 wrong.
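
 For example, assuming the 32-bit truncation described above, one gets
 behaviour like this on a 64-bit system (a fixed implementation prints
 Infinity instead):
 {{{
 main :: IO ()
 main = print (scaleFloat (2^32) (1.0 :: Double))
 -- decodeFloat 1.0 = (2^52, -52), so n+k = 2^32 - 52, which fits in a
 -- 64-bit Int, but truncated to a signed 32-bit int it wraps to -52,
 -- giving the incorrect result 1.0.
 }}}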

 The solution is to clamp the scale parameter k so that (n + clamp k)
 cannot overflow a 32-bit int, while the outcome is preserved: (n +
 clamp k) is still larger than TYP_MAX_EXP whenever (n+k) is, and still
 smaller than (TYP_MIN_EXP - 2*TYP_MANT_DIG) whenever (n+k) is.

 So the bound to which the scale parameter is clamped must be at least
 TYP_MAX_EXP - TYP_MIN_EXP + 2*TYP_MANT_DIG in magnitude.

 For Double, that is 1024 - (-1021) + 2*53 = 2151; for Float it's
 128 - (-125) + 2*24 = 301, assuming base-2 IEEE types.
 2500 is just an arbitrary magic number that is large enough for both.
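
 A minimal sketch of that clamping (the names clampScale and scaleFloat'
 are mine, not the patch's):
 {{{
 -- Clamp the scale parameter to [-2500, 2500]; by the computation above
 -- that bound is large enough for both Double and Float, and n plus the
 -- clamped value stays far away from 32-bit int overflow.
 clampScale :: Int -> Int
 clampScale k = max (-2500) (min 2500 k)

 scaleFloat' :: RealFloat a => Int -> a -> a
 scaleFloat' k x = encodeFloat m (n + clampScale k)
   where (m, n) = decodeFloat x
 }}}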

 However, I didn't think it through properly: a) there's the possibility
 of a platform with different Double and Float types (unlikely, but
 possible), and, worse, b) the fix won't carry over to a newtype around
 Double/Float with a hand-written !RealFloat instance that uses the
 default method for scaleFloat.

 Hence let me do it properly, also fixing the default method and using
 type-specific clamping bounds (I trust GHC to constant-fold the values
 known at compile time, since it's simple Int arithmetic).
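
 A sketch of such a generic default method, deriving the bound from
 floatRange and floatDigits instead of a magic number (my reconstruction,
 not necessarily the committed patch):
 {{{
 import Prelude hiding (scaleFloat)

 scaleFloat :: RealFloat a => Int -> a -> a
 scaleFloat k x = encodeFloat m (n + clamp b k)
   where
     (m, n) = decodeFloat x
     (l, h) = floatRange x   -- (-1021, 1024) for Double
     d      = floatDigits x  -- 53 for Double
     -- b >= TYP_MAX_EXP - TYP_MIN_EXP + 2*TYP_MANT_DIG:
     -- 2151 for Double, 301 for Float.
     b      = h - l + 2 * d
     clamp bd j = max (-bd) (min bd j)

 main :: IO ()
 main = print (scaleFloat 3 (1.0 :: Double))  -- 8.0
 }}}
 (A production version would also want to pass 0, NaN and the infinities
 through unchanged, since decodeFloat's behaviour on those is
 implementation-dependent.)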

-- 
Ticket URL: <http://hackage.haskell.org/trac/ghc/ticket/4381#comment:5>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler