On 10/23/2013 8:44 AM, Apollo Hogan wrote:
That is: without optimization the run-time "normalization" is correct. With optimization it is broken. That is pretty easy to work around by simply compiling the relevant library without optimization. (Though it would be nice to have, for example, pragmas to mark some functions as "delicate" or "non-optimizable".) A bigger issue is that the compile-time normalization call gives the 'wrong' answer consistently with or without optimization. One would expect that evaluating a pure function at run-time or compile-time would give the same result...
A D compiler is allowed to compute floating-point results at arbitrarily large precision; the storage sizes (float, double, real) only specify the minimum precision.
This behavior is fairly deeply embedded into the front end, optimizer, and various back ends.
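To make the failure mode concrete, here is a minimal sketch (mine, not from the original post) of the quick-two-sum step that double-double normalization is typically built on. The trick only works if every intermediate is rounded to exactly 64 bits; if the compiler keeps `s` or `s - a` in an 80-bit register, the computed error term is no longer the exact rounding error of the sum:

void quickTwoSum(double a, double b, out double s, out double e)
{
    // Assumes |a| >= |b|.
    s = a + b;        // must be rounded to double for the trick to work
    e = b - (s - a);  // exact error of the rounded sum -- only if s was rounded
}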
To precisely control the maximum precision, I suggest using inline assembler to emit the exact sequence of instructions needed for double-double operations.
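For example, a minimal sketch assuming DMD-style inline assembler on x86/x86-64 with SSE2 (the function name is illustrative): scalar SSE2 instructions such as movsd and addsd always round to 64-bit double precision, so no excess precision can leak into the error-term computation:

double addStrict(double a, double b)
{
    double r;
    asm
    {
        movsd XMM0, a;   // load a as a 64-bit double
        addsd XMM0, b;   // a + b, rounded to double precision
        movsd r, XMM0;   // store the correctly rounded sum
    }
    return r;
}

Building the two-sum sequence on top of such rounded primitives (or writing the whole sequence inside one asm block) keeps every intermediate at exactly 64 bits regardless of optimization level or compile-time evaluation.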
