On 5/10/2016 12:31 AM, Manu via Digitalmars-d wrote:
Think of it like this: a float doesn't represent a precise point (it's
an approximation by definition), so see the float as representing the
interval from the absolute value it stores to that value plus 1 mantissa
bit. If you see floats that way, then the natural way to compare them is
to demote to the lowest common precision, and that wouldn't be
considered erroneous, or even warning-worthy; just documented
behaviour.

Floating-point behavior is so commonplace that I am wary of inventing new, unusual semantics for it.
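
For concreteness, here is a minimal D sketch of the comparison Manu describes: demote the wider operand to the narrower precision before comparing. The helper name eqLowestPrecision is hypothetical; this is not how D's == behaves today, where the float operand is promoted to double.

    import std.stdio;

    // Sketch of the proposed "demote to lowest common precision" comparison
    // (hypothetical helper, not current D semantics).
    bool eqLowestPrecision(double a, float b)
    {
        // Round the double down to float so both operands carry the same
        // number of mantissa bits before comparing.
        return cast(float) a == b;
    }

    void main()
    {
        float  f = 0.1f;
        double d = 0.1;

        writeln(d == f);                  // false: f is promoted to double
        writeln(eqLowestPrecision(d, f)); // true: d is demoted to float
    }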
