On Thursday, 19 May 2016 at 11:00:31 UTC, Ola Fosheim Grøstad wrote:
On Thursday, 19 May 2016 at 08:37:55 UTC, Joakim wrote:
On Thursday, 19 May 2016 at 08:28:22 UTC, Ola Fosheim Grøstad wrote:
On Thursday, 19 May 2016 at 06:04:15 UTC, Joakim wrote:
In this case, not increasing precision gets the more accurate result, but other examples could be constructed that _heavily_ favor increasing precision. In fact, almost any real-world, non-toy calculation would favor it.

Please stop saying this. It is very wrong.

I will keep saying it because it is _not_ wrong.

Then can you please read this paper in its entirety before you keep saying it? Because changing precision breaks properties of the semantics of IEEE floating point.

What Every Computer Scientist Should Know About Floating-Point Arithmetic

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html#3377

«Conventional wisdom maintains that extended-based systems must produce results that are at least as accurate, if not more accurate than those delivered on single/double systems, since the former always provide at least as much precision and often more than the latter. Trivial examples such as the C program above as well as more subtle programs based on the examples discussed below show that this wisdom is naive at best: some apparently portable programs, which are indeed portable across single/double systems, deliver incorrect results on extended-based systems precisely because the compiler and hardware conspire to occasionally provide more precision than the program expects.»

The example he refers to is laughable because it also checks for equality.
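For concreteness, here is a minimal C sketch along the lines of the equality-checking program the paper refers to (whether it matches the paper's exact listing is my assumption; the behavior described assumes an x87-style extended-based target where the compiler evaluates expressions in 80-bit precision):

#include <stdio.h>

int main(void) {
    double q = 3.0 / 7.0;   /* quotient rounded to double and stored */

    /* On an extended-based system the compiler may evaluate the
       right-hand expression below in extended precision and compare
       it against the double-rounded q, so the test can fail even
       though the source looks perfectly portable across
       single/double systems. */
    if (q == 3.0 / 7.0)
        printf("Equal\n");
    else
        printf("Not Equal\n");

    return 0;
}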

The notion that "error correction" can fix the inevitable degradation of accuracy with each floating-point calculation is just laughable.

Well, it is not laughable to computer scientists that accuracy depends on knowledge about precision and rounding... And I am a computer scientist, in case you have forgotten...

Computer scientists are no good if they don't know any science.
