On Fri, Jun 27, 2014 at 12:51 PM, Steven D'Aprano
> Although you seem to have missed the critical issue: this is a failure
> mode which *binary floats cannot exhibit*, but decimal floats can. The
> failure being that
> assert x <= (x+y)/2 <= y
> may fail if x and y are base 10 floats.
No, I didn't miss that; I said that what you were looking at was
*also* caused by intermediate rounding. It happens because .516 + .518
= 1.034, which rounds to 1.03 at three digits of precision; half of
that is .515, which is outside your original range - but the
intermediate rounding effectively reduced the precision to two digits
by discarding some of the information in the original values. If you
accept that your result is now accurate to only two digits of
precision, then that result is within one ULP of correct (you'll
record the average as either .51 or .52; your two original inputs both
round to .52, and the average of .52 and .52 is clearly .52).
But you're right that this can be very surprising. And it's inherent
to digits carrying more information than just "high" or "low": with
binary floats, dividing by two is exact, and the rounded sum of x and
y always lands between 2x and 2y (ignoring overflow), so there's no
way the computed midpoint can escape the original range.
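A quick spot check of the binary-float claim (not a proof, just a
randomized sanity test over an arbitrary range I picked):

```python
import random

random.seed(0)
for _ in range(100_000):
    # draw an ordered pair of binary floats
    x, y = sorted(random.uniform(-1e6, 1e6) for _ in range(2))
    mid = (x + y) / 2       # sum rounds once; halving is exact in binary
    assert x <= mid <= y    # midpoint never escapes [x, y]
print("ok")
```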