On 7/2/2014 1:53 AM, Don wrote:
> Definitely, discarding excess precision is a crucial operation. C and C++ tried
> to do it implicitly with "sequence points", but that kills optimisation
> possibilities so much that compilers don't respect it. I think it's actually
> quite similar to write barriers in multithreaded programming. C got it wrong,
> but we're currently in an even worse situation because it doesn't necessarily
> happen at all.
>
> We need a builtin operation -- and not in std.math; this is as crucial as
> addition, and it's purely a signal to the optimiser. It's very similar to a
> casting operation. I wonder if we can do it as an attribute? .exact_float,
> .restrict_float, .force_float, .spill_float, or something similar?
>
> With D's current floating-point semantics, it's actually impossible to write
> correct floating-point code. Everything that works right now is technically
> only working by accident.
>
> But if we get this right, we can have very nice semantics for when things like
> FMA are allowed to happen -- essentially the optimiser would have free rein
> between these explicit discard_excess_precision sequence points.
This is easily handled without language changes by putting a couple of builtin
functions in druntime -- roundToFloat() and roundToDouble().
> Ideally, I think we'd have a __real80 type. On 32-bit x86 this would be the
> same as 'real', while on x86-64 __real80 would be available but probably
> 'real' would alias to double. But I'm a lot less certain about this.
I'm afraid that would not only break most D programs, but also interoperability
with C.