According to the D language specification (http://dlang.org/spec/float.html), the compiler is allowed to compute any floating-point expression at a higher precision than that of its static type. Is there a way to disable this behaviour?

Context (reason why I need this): I am building a "double double" type, which combines two 64-bit double-precision numbers to emulate a (nearly) quadruple-precision number. A simplified version looks something like this:

struct ddouble
{
    double high;
    double low;

    invariant
    {
        // low must be small enough to be absorbed when added to high
        // in double precision (roughly, |low| at most half an ulp of high):
        assert(high + low == high);
    }

    // ...implementations of arithmetic operations...
}
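
To show why strict rounding matters: double-double arithmetic is typically built on error-free transformations such as Knuth's two-sum, where the error term is exact only if every operation rounds to 64-bit double. A simplified sketch (illustrative, not my actual code):

ddouble twoSum(double a, double b)
{
    // s is the double-precision sum, e the exact rounding error;
    // the algebra below is only valid under strict IEEE double rounding.
    double s = a + b;
    double v = s - a;
    double e = (a - (s - v)) + (b - v);
    return ddouble(s, e);
}

If intermediates are silently kept at 80-bit precision, e no longer captures the rounding error and the representation's guarantees are lost.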

Everything works fine at run time, but if I declare a compile-time constant like

enum pi = ddouble(3.141592653589793116e+00, 1.224646799147353207e-16);

the invariant fails because it is evaluated in 80-bit "extended precision" during CTFE. All arithmetic operations rely on IEEE-conformant double precision, so everything breaks down if the compiler decides to carry them out at higher precision. I am currently using LDC on 64-bit Linux, if that is relevant.
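
To make this concrete, here is a small run-time demonstration of the effect (assuming real is the x87 80-bit extended format, as it is on my platform):

import std.stdio;

void main()
{
    double high = 3.141592653589793116e+00;
    double low  = 1.224646799147353207e-16;

    // |low| is below half an ulp of high, so a strict double-precision
    // sum rounds straight back to high and the invariant holds:
    writeln(high + low == high);        // true at run time (SSE2 doubles)

    // The same sum carried out in 80-bit extended precision keeps some
    // of low's bits, so the comparison fails -- this is what happens to
    // the invariant under CTFE:
    real r = cast(real)high + cast(real)low;
    writeln(r == high);                 // false
}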

(If you are interested in the "double double" type, take a look at
https://github.com/BrianSwift/MetalQD
which includes double-double and even quad-double implementations in C/C++/Fortran.)
