On Monday, 16 May 2016 at 19:38:10 UTC, Joakim wrote:
> Regarding floating-point, I'll go farther than you and say that if an algorithm depends on lower-precision floating-point to be accurate, it's a bad algorithm.

No. In system-level programming, a good algorithm is one that is effective and efficient on the hardware at hand.

A good system-level programming language gives you control at the hardware level, including the floating-point unit, in the language itself.

There are lots of algorithms that will break if you randomly switch the precision of different expressions.

Heck, nearly all of the SIMD optimizations on differentials that I use will break if one lane is computed with a different precision than the other lanes. I don't give a rat's ass about increased precision. I WANT THE DAMN LANES TO BE IN THE SAME PHASE (or close to it)!! Phase-locking is much more important than value accuracy.

Or to put it differently: It does not matter if all clocks are too slow, as long as they are running at the same speed. It is a lot worse if some clocks are too slow and others are too fast. That would lead to some serious noise in a time series.
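To illustrate (a minimal sketch, not my actual SIMD code; the oscillator and step size are made up): two "lanes" advance the same phasor, one at float precision and one at double precision. Run it long enough and the two lanes no longer agree on where the oscillator is:

    import std.stdio;
    import std.math : PI, cos, sin;

    void main()
    {
        immutable double w   = 2.0 * PI / 1000.0;   // angular step per sample
        immutable double cw  = cos(w), sw = sin(w);
        immutable float  cwf = cast(float) cw, swf = cast(float) sw;

        float  cf = 1.0f, sf = 0.0f;                // float lane
        double cd = 1.0,  sd = 0.0;                 // double lane

        foreach (_; 0 .. 10_000_000)
        {
            // rotate both phasors by the "same" angle, each at its own precision
            immutable float cfNew = cf * cwf - sf * swf;
            sf = cf * swf + sf * cwf;
            cf = cfNew;

            immutable double cdNew = cd * cw - sd * sw;
            sd = cd * sw + sd * cw;
            cd = cdNew;
        }

        // the two lanes now report different positions for the "same"
        // oscillator -- they have drifted out of phase
        writefln("float lane:  (%g, %g)", cf, sf);
        writefln("double lane: (%g, %g)", cd, sd);
    }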

Of course, no hardware vendor is crazy enough to randomly switch precision on their ALU. Hardware vendors do understand that this _WILL_ lead to disaster. Sadly, many in the D community don't. Presumably that is because they don't actually try to write performant floating-point code; they are not system-level programmers.

(Btw, many error correcting strategies break too if you randomly switch precision.)

> Now, people can always make mistakes in their implementation and unwittingly depend on lower precision somehow, but that _should_ fail.

People WILL make mistakes, but if you cannot control precision then you cannot:

1. create a reference implementation to compare with

2. unit test floating-point code in a reliable way (see the sketch after this list)

3. test for convergence/divergence in feedback loops (which can have _disastrous_ results and could literally ruin your speakers/hearing in the case of audio).
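As a minimal sketch of point 2 (the kernel and the error bound are made up for illustration): a float implementation tested against a double reference. The bound only means something if the float path really runs at float precision:

    import std.math : abs;

    // hypothetical float kernel under test
    float sumF(const float[] xs)
    {
        float acc = 0.0f;
        foreach (x; xs) acc += x;   // meant to run at 32-bit precision
        return acc;
    }

    // reference implementation at double precision
    double sumD(const double[] xs)
    {
        double acc = 0.0;
        foreach (x; xs) acc += x;
        return acc;
    }

    unittest
    {
        import std.algorithm : map;
        import std.array : array;
        import std.range : iota;

        auto xsD = iota(1, 1001).map!(i => 1.0 / i).array;
        auto xsF = xsD.map!(x => cast(float) x).array;

        // the bound below was derived for float accumulation; if the
        // compiler silently widens the accumulator, the test no longer
        // measures the code that actually ships
        assert(abs(sumF(xsF) - sumD(xsD)) < 1e-3);
    }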

> None of this is controversial to me: you shouldn't be comparing floating-point numbers with anything other than approxEqual,

I don't agree.

1. Comparing typed constants for equality should be unproblematic. In D that is broken (see the sketch after this list).

2. Testing for zero is a necessity when doing division.

3. Comparing for equality is the same as subtraction followed by testing for zero.
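A minimal sketch of points 1-3 (the function and values are made up; 0.25 is chosen because it is exactly representable, but the same should hold for any value as long as both sides keep their typed precision):

    float normalize(float x, float len)
    {
        // point 2: test for zero before dividing
        if (len == 0.0f)
            return 0.0f;            // degenerate input, skip the division
        return x / len;
    }

    unittest
    {
        // point 1: a constant typed to float, compared against the
        // same float-typed literal
        immutable float scale = 0.25f;
        assert(scale == 0.25f);

        // point 3: equality is just subtraction followed by a zero test
        assert(scale - 0.25f == 0.0f);

        assert(normalize(2.0f, 0.0f) == 0.0f);
    }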

So, the rule is: you shouldn't compare at all unless you know the error bounds, but that depends on WHY you are comparing.

However, with constants/sentinels and some methods you do know... Also, with some inputs you know that the algorithm WILL fail for certain values at a _GIVEN_ precision. Testing those values for equality makes a lot of sense, until some a**hole decides to randomly "improve" precision where it was typed to something specific and known.

Take this:

f(x) = 1/(2-x)

Should I not be able to test for the exact value "2" here? I don't see why "1.3" typed to a given precision should be any different. You want to force me to use a more than 3x more expensive test just to satisfy some useless FP semantics that provide no real-world benefit whatsoever?
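For instance (a sketch, assuming ordinary IEEE semantics; how the pole is handled is up to the caller):

    double f(double x)
    {
        // 2.0 is exactly representable, so this is a well-defined
        // sentinel test -- provided x really is evaluated at the
        // precision it was typed with
        if (x == 2.0)
            return double.nan;      // or however the caller wants the pole handled
        return 1.0 / (2.0 - x);
    }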


> increasing precision should never bother your algorithm, and a higher-precision, common soft-float for CTFE will help cross-compiling and you'll never notice the speed hit.

Randomly increasing precision is never a good idea. Having different precision in different code paths can ruin the quality of both rendering and data analysis, and it can break algorithms.

Having an untyped, infinite-precision real that may be downgraded to, say, a 64-bit mantissa is acceptable. Or, in the case of Go, a 256-bit mantissa. That's a different story.

Having single-precision floats that are randomly turned into arbitrary-precision floats is not acceptable. Not at all.
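Here is a minimal sketch of the concern (the kernel is hypothetical; whether the two lines below are guaranteed to print the same value is exactly what is being argued about):

    import std.stdio;

    float kernel(float a, float b, float c)
    {
        // if the intermediate a * b may be kept at some higher precision,
        // its rounding can differ between compile time and run time
        return a * b + c;
    }

    enum float atCompileTime = kernel(1.3f, 0.1f, -0.13f);  // forced CTFE

    void main()
    {
        float a = 1.3f, b = 0.1f, c = -0.13f;
        writefln("compile time: %.10g", atCompileTime);
        writefln("run time:     %.10g", kernel(a, b, c));
        // the complaint: nothing pins these two to the same precision,
        // so there is no guarantee they print the same value
    }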
