On Tuesday, 29 October 2013 at 19:42:08 UTC, Walter Bright wrote:
On 10/29/2013 5:08 AM, Don wrote:
On Wednesday, 23 October 2013 at 16:50:52 UTC, Walter Bright wrote:
On 10/23/2013 9:22 AM, David Nadlinger wrote:
On Wednesday, 23 October 2013 at 16:15:56 UTC, Walter Bright wrote:
A D compiler is allowed to compute floating point results at arbitrarily large precision - the storage size (float, double, real) only specifies the minimum precision.

This behavior is fairly deeply embedded into the front end, optimizer, and
various back ends.
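
For illustration (this example is mine, not from the original posts), the following is the sort of thing that rule permits:

    import std.stdio;

    void main()
    {
        float a = 0.1f;
        float b = 0.2f;
        float s = a + b;   // rounded to float when stored into s
        // The right-hand side below may legally be evaluated at double or
        // real precision before the comparison, so this can print false on
        // one compiler/target and true on another.
        writeln(s == a + b);
    }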

I know we've had this topic before, but just for the record, I'm still not sold on the idea of allowing CTFE to yield different results than runtime execution.

Java initially tried to enforce a maximum precision, and it was a major disaster for them. If I have been unable to convince you, I suggest reviewing
that case history.

Back when I designed and built digital electronics boards, it was beaten into my skull that chips always get faster, never slower, and the slower parts routinely became unavailable. This means that the circuits got designed with maximum propagation delays in mind, and with a minimum delay of 0. Then, when they work with a slow part, they'll still work if you swap in a faster one.

FP precision is the same concept. Swap in more precision, and your correctly
designed algorithm will still work.


THIS IS COMPLETELY WRONG. You cannot write serious floating-point code under such circumstances. This takes things back to the bad old days before IEEE,
where results were implementation-dependent.

We have these wonderful properties, float.epsilon, etc., which allow code to adapt to machine differences. The correct approach is to write generic code which will give full machine precision and will work on any machine configuration. That's actually quite easy.
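
For illustration (this sketch is not from the original post; mySqrt is just an illustrative name), generic code of that kind might look like:

    import std.math : fabs;

    // Newton's method square root that adapts to the precision of T via
    // T.epsilon, so the same source gives full machine precision whether
    // T is float, double, or real.  Assumes x > 0.
    T mySqrt(T)(T x)
    {
        T guess = x / 2;
        foreach (_; 0 .. 100)   // iteration cap as a safety net
        {
            T next = (guess + x / guess) / 2;
            // Stop once the step is within the relative precision of T.
            if (fabs(next - guess) <= T.epsilon * fabs(next))
                return next;
            guess = next;
        }
        return guess;
    }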

But to write code which will function correctly when an unspecified and unpredictable error can be added to any calculation -- I believe that's
impossible. I don't know how to write such code.

Unpredictable, sure, but it is unpredictable in that the error is less than a guaranteed maximum error. The error falls in the range 0 <= error <= epsilon. As an analogy, all engineering parts are designed with a maximum deviation from the ideal size, not a minimum deviation.

I don't think the analogy is strong. There's no reason for there to be any error at all.

Besides, in the x87 case, there are exponent errors as well as precision errors. E.g., double.min * double.min can be zero on some systems, but non-zero on others. This causes a total loss of precision.
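
To make that concrete (illustrative code, not from the post; double.min is the smallest normal double, today also spelled double.min_normal):

    import std.stdio;

    void main()
    {
        double x = double.min_normal;
        // Whether this prints true or false depends on whether the product
        // is kept at extended precision (non-zero) or rounded to double
        // (underflows to 0) before the comparison is made.
        writeln(x * x > 0);
    }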

If this is allowed to happen anywhere (and not even consistently) then it's back to the pre-IEEE 754 days: underflow and overflow lead to unspecified behaviour.

The idea that extra precision is always a good thing is simply incorrect.

The problem is that, if calculations can carry extra precision, double rounding can occur. This is a form of error that doesn't otherwise exist. If all calculations are allowed to do it, there is absolutely nothing you can do to fix the problem.

Thus we lose the other major improvement from IEEE 754: predictable rounding behaviour.
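
Here's a concrete instance of double rounding (my example; the constants are chosen so the exact sum sits just above the halfway point between two adjacent doubles):

    import std.stdio;

    void main()
    {
        double a = 1.0;
        double b = 0x1p-53 + 0x1p-75;   // exactly representable as a double
        double sum = a + b;
        // Rounded once, directly to double, the exact sum 1 + 2^-53 + 2^-75
        // rounds up to 1 + 2^-52.  Rounded first to 80-bit extended and then
        // to double (x87 registers, or CTFE at real precision), the 2^-75
        // part is lost, the remaining tie rounds to even, and the result is
        // exactly 1.0.  Which answer you get depends on the intermediate
        // precision used.
        writefln("%a", sum);
    }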

Fundamentally, there is a primitive operation "discard extra precision" which is crucial to most mathematical algorithms but which is rarely explicit. In theory in C and C++ this is applied at each sequence point, but in practice that's not actually done (for x87 anyway) -- for performance, you want to be able to keep values in registers sometimes. So C didn't get this exactly right. I think we can do better. But the current behaviour is worse.
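
As a rough sketch of what such a primitive could look like (the name and approach are mine, not an existing library function):

    // Hypothetical helper: force x to be rounded to 64-bit double,
    // discarding any extra precision it may carry in an 80-bit register.
    double discardExtraPrecision(double x)
    {
        static double sink;   // round-trip through memory
        sink = x;
        // An optimiser is free to elide the store and keep x in a register,
        // which is exactly why C's sequence-point rule was never reliably
        // honoured in practice; a real primitive would need compiler support.
        return sink;
    }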

This issue is becoming more obvious in CTFE because the extra precision is not merely theoretical, it actually happens.
