Richard Henderson wrote:

> On Tue, Apr 03, 2007 at 10:56:42AM +0200, Uros Bizjak wrote:
> > ...  Note that a change of default precision control may
> > affect the results returned by some of the mathematical functions.
> >
> > to the documentation to warn users about this fact.
>
> Eh.  It can seriously break some libm implementations that
> require longer precision.  It's one of the reasons I'm not
> really in favour of global switches like this.

I just (re-)discovered these tables giving maximum known errors in some libm functions when extended precision is enabled:

http://people.inf.ethz.ch/gonnet/FPAccuracy/linux/summary.html

and when the precision of the mantissa is set to 53 bits (double precision):

http://people.inf.ethz.ch/gonnet/FPAccuracy/linux64/summary.html

This is from 2002, and indeed, some of the errors in double-precision results are hundreds or thousands of times bigger when the precision is set to 53 bits.

I think the warning in the documentation is very mild considering the possible effects.

Perhaps the manual should also mention that this option sometimes brings a 2% improvement in the speed of FP-intensive code along with massive increases in the error of some libm functions; then people could decide whether they want to use it. (I'm not opposed to a switch like this; my favorite development environment sets the precision to 53 bits globally, just as this switch does. I just think the documentation should be clearer about the trade-offs.)
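
For concreteness, here is a rough sketch of what running with the precision control set to 53 bits amounts to on Linux/glibc with an x86 target. The set_pc53 helper is made up for illustration, but the <fpu_control.h> macros are the standard glibc ones; a global switch of this kind presumably arranges for something equivalent at program startup:

#include <stdio.h>
#include <math.h>
#include <fpu_control.h>   /* glibc-specific, x86 only */

/* Switch the x87 precision-control field to 53-bit (double) precision. */
static void set_pc53 (void)
{
  fpu_control_t cw;
  _FPU_GETCW (cw);
  cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;   /* clear PC field, select 53 bits */
  _FPU_SETCW (cw);
}

int main (void)
{
  double x = 0.1;
  printf ("exp(%g) with 64-bit significand: %.17g\n", x, exp (x));
  set_pc53 ();
  printf ("exp(%g) with 53-bit significand: %.17g\n", x, exp (x));
  return 0;
}

Whether the two printed results actually differ depends on how the installed libm computes exp (an SSE-based implementation won't care), which is exactly the problem: the accuracy you get from the same library call quietly changes underneath you.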

Brad
