On Tuesday, 25 August 2015 at 17:40:06 UTC, Steven Schveighoffer wrote:
> I'll note that D does exactly what C does in the case where you
> are using 80-bit floating point numbers.
>
> I don't think C specifies how it should be done, but some
> compilers have a "precise" compilation flag that is supposed to
> retain order and accurate intermediate rounding.
>
> IMO, these two operations should be the same. If the result of
> an expression is detected to be double, then it should behave
> like one. You can't have the calculation done in 80-bit mode,
> and then magically throw away the rounding to get to 64-bit
> mode.
Yes, that is rather obvious. IEEE 754-2008 goes much further than
that, though: it requires that all arithmetic be correctly
rounded. Yes, I am aware that the D specification allows higher
precision, but it seems to me that this gets you neither
predictable results nor maximum performance. And what is the
point of being able to set the rounding mode if you don't know
the bit width being used?
It is a practical issue in all simulations where you want
reproducible results. If D is meant for scientific computing, it
should support correct rounding and reproducible results. If D is
meant for gaming, it should provide ways of expressing minimum
precision, or other ways of loosening the accuracy where needed.
I'm not really sure which group the current semantics appeal to.
I personally want either reproducible results or very fast code...