On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
>> In this case, not increasing precision gets the more accurate
>> result, but other examples could be constructed that _heavily_
>> favor increasing precision. In fact, almost any real-world,
>> non-toy calculation would favor it.
>
> Please stop saying this. It is very wrong.
I will keep saying it because it is _not_ wrong.
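To make the claim concrete: accumulating many terms in a wider intermediate type usually does give a more accurate result. A minimal sketch (in Python rather than D, purely for self-containment; single precision is simulated by rounding through the `struct` module) comparing a 32-bit accumulator against a 64-bit one:

```python
import struct

def to_f32(x):
    # Round a Python float (an IEEE double) to the nearest
    # IEEE single-precision value and back.
    return struct.unpack('f', struct.pack('f', x))[0]

N = 100_000
term = 0.1          # exact decimal sum would be 10000

# Single-precision accumulator: every partial sum is rounded to 32 bits.
s32 = 0.0
for _ in range(N):
    s32 = to_f32(s32 + to_f32(term))

# Double-precision accumulator: Python's native 64-bit float.
s64 = 0.0
for _ in range(N):
    s64 += term

print(abs(s32 - 10000.0))   # error on the order of 1e-4 or worse
print(abs(s64 - 10000.0))   # error several orders of magnitude smaller
```

The wider accumulator wins by a large margin here, which is the effect being argued for: each rounding step throws away less information.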
> Algorithms that need higher accuracy need error-correction
> mechanisms, not unpredictable precision and rounding.
> Unpredictable precision and rounding make adding error
> correction difficult, so they do not improve accuracy; they
> harm accuracy when you need it.
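The "error correction" being referred to here is typically compensated (Kahan) summation, which recovers the rounding error of each addition. A hedged sketch, again in Python for self-containment (CPython rounds every operation to double, so the compensation behaves as designed):

```python
def naive_sum(xs):
    s = 0.0
    for x in xs:
        s += x
    return s

def kahan_sum(xs):
    # Compensated summation: c carries the low-order bits lost
    # when each partial sum is rounded to double precision.
    s = 0.0
    c = 0.0
    for x in xs:
        y = x - c
        t = s + y
        # (t - s) - y recovers the rounding error of s + y. If a
        # compiler silently evaluated these intermediates at a
        # higher precision, this term could collapse toward zero
        # and the compensation would be lost -- which is the
        # objection to unpredictable intermediate precision.
        c = (t - s) - y
        s = t
    return s

data = [0.1] * 100_000   # exact decimal sum: 10000
print(naive_sum(data))   # drifts measurably from the exact sum
print(kahan_sum(data))   # stays within a couple of ulps
```

Whether silently widened intermediates actually break such code in D is exactly what is in dispute in this thread; the sketch only shows why the algorithm is sensitive to the precision each individual operation is carried out in.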
And that is what _you_ need to stop saying: there's _nothing
unpredictable_ about what D does. You may find it unintuitive,
but that's your problem. The notion that "error correction" can
fix the inevitable degradation of accuracy with each
floating-point calculation is just laughable.