Hi guys,

I was thinking that we could probably improve the confidence we have 
in double-checked results if we adopted the following tactic.

Normally we run with the FPU set to round to nearest, ties to even (the 
default mode). Suppose we ran first tests with rounding set to round 
down (towards -infinity) and double checks with rounding set to round 
up (towards +infinity).

This would cost us about one bit of precision in the mantissa, but, if 
a double-check run verified the residual, we'd be _certain_ that 
rounding errors weren't compromising the algorithm.

We would probably have to reduce the FFT size changeover points 
slightly to accommodate the small loss of precision, but would 
probably be able to regain the resulting throughput loss by reducing 
the frequency with which other "sanity checks" were done.

The combination of round-up/round-down matching and a randomized 
offset would surely make results obtained with the same program less 
likely to match incorrectly. Not that this is a particular problem, 
but we might be able to get the improvement for a very small (or 
possibly even negative) cost in execution time.
 

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.exu.ilstu.edu/mersenne/faq-mers.txt
