On Wednesday, 18 May 2016 at 20:29:27 UTC, Walter Bright wrote:
> I do not understand the tolerance for bad results in scientific, engineering, medical, or finance applications.
I don't think anyone has suggested tolerance for bad results in any of those applications.
What _has_ been argued is that in order to _prevent_ bad results, the programmer needs as much control over, and clarity about, the choice of floating-point precision as possible.
If I'm writing a numerical simulation or calculation with insufficient floating-point precision, I don't _want_ to be saved by under-the-hood precision increases -- I want it to break, because then I'm forced to improve either the floating-point precision or the choice of algorithm (or both).
To be clear: the fact that D makes it a priority to offer me the highest possible floating-point precision is awesome. But precision is not the only factor in generating accurate scientific results.
