So it appears that the result whose mantissa ends in 21 (-6192449487634421)
is the likely correct one, as opposed to the one ending in 22.

Is this an issue with an underlying library (glibc, etc.), or with GHC
itself?
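
One way to check, independently of any FPU, is to compute the product in
exact Rational arithmetic and round it back to Double once (GHC's
fromRational rounds to nearest). A minimal sketch, using the values from
the ticket below:

  main :: IO ()
  main = do
    let x = -4.4 :: Double
        y = 2.4999999999999956 :: Double
        exact   = toRational x * toRational y    -- exact product, no FPU
        correct = fromRational exact :: Double   -- rounded once, to nearest
    print (decodeFloat (x * y))   -- what the hardware multiply produced
    print (decodeFloat correct)   -- the correctly rounded result

If the two printed lines disagree, the hardware product was not correctly
rounded.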


On Sat, Jul 12, 2014 at 8:03 PM, GHC <ghc-devs@haskell.org> wrote:

> #9304: Floating point woes; Different behavior on Mac vs Linux
> -------------------------------------+------------------------------------
>         Reporter:  lerkok            |            Owner:
>             Type:  bug               |           Status:  new
>         Priority:  high              |        Milestone:
>        Component:  Compiler          |          Version:  7.8.3
>       Resolution:                    |         Keywords:  floating point
> Operating System:  Unknown/Multiple  |     Architecture:  Unknown/Multiple
>  Type of failure:  None/Unknown      |       Difficulty:  Unknown
>        Test Case:                    |       Blocked By:  9276
>         Blocking:                    |  Related Tickets:
> -------------------------------------+------------------------------------
>
> Comment (by carter):
>
>  I ran it on a 64-bit Linux server I have, using GHCi:
>  {{{
>  Prelude> :set -XScopedTypeVariables
>  Prelude> let x :: Double = -4.4
>  Prelude> let y :: Double = 2.4999999999999956
>  Prelude> decodeFloat (x*y)
>  (-6192449487634421,-49)
>
>  }}}
>
>  So if anything, it looks like it's 32-bit vs 64-bit.
>
>  Could you try running the above snippet in GHCi on your 32-bit machine?
>
> --
> Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/9304#comment:4>
> GHC <http://www.haskell.org/ghc/>
> The Glasgow Haskell Compiler
>
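
Regarding the 32-bit vs 64-bit hypothesis above: if the 32-bit build goes
through the x87 unit, intermediates carry a 64-bit significand (80-bit
extended precision), so the product is effectively rounded twice, first to
64 bits and then to 53, and double rounding can land on a different answer
than a single rounding. Below is a sketch that simulates both paths in
exact Rational arithmetic; roundSig, ilog2, and nearestEven are helpers
written for this illustration, not library functions:

  import Data.Ratio ((%))

  -- Round a Rational to the nearest value with an n-bit significand
  -- (round-to-nearest, ties-to-even), returned as an exact Rational.
  roundSig :: Int -> Rational -> Rational
  roundSig _ 0 = 0
  roundSig n q
    | q < 0     = negate (roundSig n (negate q))
    | otherwise = fromInteger m * 2 ^^ e
    where
      e = ilog2 q - (n - 1)   -- scale so q * 2^(-e) lands in [2^(n-1), 2^n)
      m = nearestEven (q * 2 ^^ negate e)

  -- floor (logBase 2 q) for a positive Rational, by repeated scaling
  ilog2 :: Rational -> Int
  ilog2 = go 0
    where
      go k r
        | r >= 2    = go (k + 1) (r / 2)
        | r < 1     = go (k - 1) (r * 2)
        | otherwise = k

  -- round to the nearest Integer, ties to even
  nearestEven :: Rational -> Integer
  nearestEven r =
    let f    = floor r
        frac = r - fromInteger f
    in case compare frac (1 % 2) of
         LT -> f
         GT -> f + 1
         EQ -> if even f then f else f + 1

  main :: IO ()
  main = do
    let exact = toRational (-4.4 :: Double)
              * toRational (2.4999999999999956 :: Double)
        once  = roundSig 53 exact                -- straight to Double
        twice = roundSig 53 (roundSig 64 exact)  -- via an x87 intermediate
    print (decodeFloat (fromRational once  :: Double))
    print (decodeFloat (fromRational twice :: Double))

Here the single rounding yields the mantissa ending in 21, while the
double-rounded path yields 22; if that matches the 32-bit output, the
culprit would be the x87's extended precision rather than glibc.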