David Brown wrote:

I don't think his point came across very well, but it is valid.
Understanding floating point is crucial to being able to program
successfully with it.  I had an entire course on numerical methods, a
noticeable part of which was understanding the nuances of what floating
point means.  Not understanding this can result in error propagation
dominating a computation.

Correct. The issue here is limited precision. Understanding chained arithmetic in the presence of limited precision is difficult.
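To make the "chained arithmetic" point concrete, here's a minimal sketch (using IEEE 754 doubles, as Python does): with limited precision, addition isn't even associative, so the order in which you chain operations changes the answer.

```python
# Floating point addition is not associative under limited precision.
# 1e16 is exactly representable as a double, but 1e16 + 1.0 rounds
# back to 1e16 because the spacing between doubles near 1e16 is 2.0.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # cancellation happens first, then 1.0 survives
right = a + (b + c)  # the 1.0 is absorbed by -1e16, then cancels away

print(left)   # 1.0
print(right)  # 0.0
```

Rearranging a long sum can therefore shift the result by amounts that look alarming if you expect real-number arithmetic; numerical methods courses spend real time on exactly this.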

How the floating point is implemented isn't really important: both a
hardware floating point processor and an emulation library are going to
give the same result (or at least they should).

Wrong.  Not even wrong.  Worse than wrong.

Almost all of the idiocies with floating point arithmetic are with "limited precision *binary* floating point".

If we were using "limited precision *decimal* floating point", 95% of the stupid problems would go away. It might be higher than 95%, but the important bit is that almost all of our intuition about arithmetic would be correct if we were using decimal floating point. Sure, 1/3 + 2/3 might still have some issues, but most people who know about repeating decimals would get why.

However, things like having .1+.1+.1+.1+.1+.1+.1+.1+.1+.1 != 1.0 would not crop up. (For those not in the know, the issue is that 1/10 is a repeating fraction in binary and, as such, is not exactly representable in binary. Consequently 1/10 in binary is slightly larger or slightly smaller than a true decimal 0.1.)
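You can watch both behaviors side by side in Python, whose built-in floats are binary (IEEE 754 doubles) and whose `decimal` module provides limited-precision decimal floating point:

```python
from decimal import Decimal

# Binary floating point: 1/10 is a repeating fraction in base 2,
# so each 0.1 carries a tiny rounding error that accumulates.
binary_sum = sum([0.1] * 10)
print(binary_sum == 1.0)   # False
print(binary_sum)          # 0.9999999999999999

# Decimal floating point: 0.1 is exactly representable, so the
# everyday intuition "ten dimes make a dollar" holds.
decimal_sum = sum([Decimal("0.1")] * 10, Decimal("0"))
print(decimal_sum == Decimal("1.0"))  # True
```

Note that decimal floating point is still *limited precision* -- Decimal(1)/Decimal(3) rounds just like 1/3 does on paper -- it simply rounds in the base humans expect.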

That's important. There is no reason for binary floating point to be the default in a programming language in this day and age. In fact, regulations in the financial sector prohibit its use for monetary calculations.

I'm not saying we should throw away binary floating point or the ability to access it, but it should no longer be the default.

Correct before fast.

-a

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
