--- Comment #8 from Walter Bright <> 2012-02-14 12:02:13 PST ---
The floating point rules in D are written with the following principle in mind:

"An algorithm is invalid if it breaks when the floating point precision is
increased. Floating point precision is always a minimum, not a maximum."

I believe (although I don't have proof) that this is a sound principle. The
programs I've seen that legitimately depended on maximum precision were:

1. Compiler/library validation test suites
2. Programs trying to programmatically test the precision

(1) is of no value to user programming, and there are alternative ways to test
the precision.

For (2), D has .properties that take care of that.

What legitimate algorithm would require sloppy precision? Would you want a
speedometer in your car that was less accurate? Cut a piece of metal to a less
accurate length? Put a less accurate amount of milk in the carton? A less
accurate autopilot? A square root further from the correct value?

Programs that rely on a maximum precision need to be rethought.

It reminds me of back when I worked in electronics. The reality of digital
chips is that they get faster every year (shorter signal propagation delays).
Hence, the golden rule in digital circuit design is to never, ever rely on a
maximum propagation speed; only rely on minimum speeds. Next year, you might
not be able to get the slower parts anymore.
