This is actually a bit more subtle than you'd think.  Are those constants
precise and exact?  (There is certainly floating-point code that exploits
the cancellations in the floating-point model.)  There are many
floating-point computations that can't be done with exact rational
operations.  There are also aspects that are target dependent, like
operations having 80-bit vs 64-bit precision (i.e. using the old Intel x87
FP registers vs SSE2 and newer).
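
To make that concrete, here is a small plain-Haskell sketch (my own names,
not GHC's Cmm folding code) of how folding a constant expression with exact
Rational arithmetic can disagree with the stepwise IEEE double evaluation
the author presumably relied on:

    -- Stepwise IEEE double evaluation, the way the unoptimised code runs:
    -- 1e16 + 1 rounds back to 1e16, so the 1 is lost to cancellation.
    stepwise :: Double
    stepwise = (1e16 + 1) - 1e16                                  -- 0.0

    -- Folding the same expression with exact Rational arithmetic and
    -- converting to Double only at the end: the 1 survives.
    folded :: Double
    folded = fromRational ((10 ^ 16 + 1) - 10 ^ 16 :: Rational)   -- 1.0

The stepwise evaluation returns 0.0, the infinite-precision fold returns
1.0, so the "optimised" program is observably different.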

What's the ticket you're working on?


Please be very cautious with floating point: any changes to the meaning
that aren't communicated by the program's author could leave a Haskeller
numerical analyst scratching their head.  For example, when doing these
floating-point computations, what rounding modes will you use?
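
As another plain-Haskell illustration (again, my own sketch, assuming
round-to-nearest-even throughout) of why the precision and rounding chosen
at fold time are visible to the program, here the default tie-breaking rule
and the point at which you round make the difference:

    -- two tiny Float constants, written with encodeFloat to keep them exact
    eps, eps' :: Float
    eps  = encodeFloat 1 (-24)   -- 2^-24, half a unit in the last place at 1.0
    eps' = encodeFloat 1 (-54)   -- 2^-54, far below single precision at 1.0

    -- stepwise single-precision evaluation: 1 + 2^-24 is a tie and rounds
    -- to even (back to 1.0), after which adding 2^-54 changes nothing
    stepwiseF :: Float
    stepwiseF = (1.0 + eps) + eps'                                -- 1.0

    -- folding the whole expression exactly and rounding to Float once:
    -- the exact sum is just above the halfway point, so it rounds up
    foldedF :: Float
    foldedF = fromRational (1 + toRational eps + toRational eps') -- 1 + 2^-23

Whichever strategy you pick, you are baking a particular rounding decision
into the compiled constant, which is exactly the kind of choice worth
spelling out on the ticket.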

On Monday, January 13, 2014, Kyle Van Berendonck wrote:

> Hi,
>
> I'm cutting my teeth on some constant folding for floats in the cmm.
>
> I have a question regarding the ticket I'm tackling:
>
> Should floats be folded with infinite precision (and later truncated to
> the platform float size) -- most useful/accurate, or folded with the
> platform precision, i.e. double, losing accuracy but keeping consistent
> behaviour with -O0 -- most "correct"?
>
> I would prefer the first case because it's *much* easier to implement than
> the second, and it'll probably rot less.
>
> Regards.
>
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
