On Thursday, 23 June 2016 at 13:57:57 UTC, Steven Schveighoffer wrote:
> Whenever you work with floating point, the loss of precision must be expected -- a finite type cannot represent an infinite-precision number.

The loss of precision should still produce a warning. If I am using reals, I obviously need a certain level of precision, and I don't want to lose it accidentally because the compiler decided the loss was not important enough to warn me about.
