On Monday, 5 February 2024 at 16:45:03 UTC, Dom DiSc wrote:
> Why is real.sizeof == 16 on x86 systems?!?
> It's the IEEE 754 extended format: 64-bit mantissa + 15-bit exponent + sign.
> It should be size 10!
> I mean, alignment may be different, but why waste so much memory even in arrays?

According to the language spec, `real` is the ["largest floating point size available"][1]. This means that on some systems, it's actually an IEEE 754 128-bit quadruple-precision float, not an x87 80-bit extended-precision float.

You can verify this by compiling the following test program:

    pragma(msg, "real is ", cast(int) real.sizeof*8, " bits");
    pragma(msg, "real has a ", real.mant_dig, "-bit mantissa");

On my laptop (Linux, x86_64), compiling this program with `dmd -c` prints

    real is 128 bits
    real has a 64-bit mantissa

An IEEE 754 quadruple-precision float has a 113-bit mantissa, so the 64-bit mantissa shows that `real` here is still the x87 80-bit extended format: only 10 of the 16 bytes carry data, and the rest is padding that keeps each array element aligned.
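To see how this plays out in arrays (the original question), here's a small sketch. The exact numbers are platform-dependent; on x86_64 Linux I'd expect a stride of 16 bytes per element, matching `real.sizeof`:

```d
import std.stdio;

void main()
{
    real[2] arr;

    // The distance between consecutive array elements is always
    // real.sizeof, even when (as with the x87 format on x86_64)
    // only 10 of those bytes hold actual data.
    auto stride = cast(size_t) &arr[1] - cast(size_t) &arr[0];

    writeln("sizeof:   ", real.sizeof);
    writeln("alignof:  ", real.alignof);
    writeln("stride:   ", stride);
    writeln("mant_dig: ", real.mant_dig);
}
```

So the "waste" in arrays is a consequence of the padded size: the stride can never be smaller than `real.sizeof`.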

[1]: https://dlang.org/spec/type.html#basic-data-types
Re: real.sizeof — Paul Backus via Digitalmars-d-learn