George Woltman wrote: 

> I went through my log files and have requeued as a first-time LL
> test any exponent that had "serious" errors on the first LL test.  A serious
> error is a "SUM(INPUTS) != SUM(OUTPUTS)" or "ROUNDOFF > 0.4".  I'd guess
> that the first test has very roughly a 50% chance of being correct.

One can actually arrive at a quantitative estimate of the probability of
the result being incorrect; such a method is described in the manuscript
at www.perfsci.com/free/techpapers/F24.ps. Briefly, if one models the
rounding errors in a floating-point convolution (DWT-based or not) using
random-walk statistics, one can easily derive a relation predicting the
maximum allowable number of bits per residue digit (there is an
undetermined constant, which can be fixed using empirical error data, but
it's around 1 if IEEE 64-bit floating-point arithmetic is used). In
addition, the model lets one answer questions such as: "if the average
error (fractional part, defined as frac(x) = |x - nint(x)|) in an output
digit is E << 1, what is the probability that there was an error >= 0.5?"
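
To make that concrete, here is one way the estimate comes out under the
Gaussian model (my paraphrase, not necessarily the exact normalization
used in the paper): if the per-digit errors are normally distributed with
mean zero and the measured average fractional error is E, then
sigma = E*sqrt(pi/2), and the per-digit probability of a fatal error is

   P(frac >= 0.5) = erfc(0.5/(sigma*sqrt(2))) = erfc(1/(2*E*sqrt(pi))),

with roughly N times that per iteration for an FFT of length N. As a
sketch in C, using the standard-library erfc:

   #include <math.h>
   #include <stdio.h>

   /* Per-digit probability of a fatal (>= 0.5) rounding error under the
      Gaussian model, given the measured average fractional error E.
      (A sketch only; the constant and exact form in the paper may differ.) */
   double p_fatal_per_digit(double E)
   {
       const double pi = 3.14159265358979323846;
       return erfc(0.5 / (E * sqrt(pi)));
   }

   int main(void)
   {
       double E = 0.05;        /* example: measured average error E      */
       long   n = 1L << 20;    /* example: FFT length (number of digits) */
       double p = p_fatal_per_digit(E);
       printf("per digit: %.3e   per iteration (~ n*p): %.3e\n", p, n * p);
       return 0;
   }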

Note that the above estimate requires knowledge of the AVERAGE error, not
the maximum fractional part (call it fracmax), the latter being the "> 0.4"
George mentions. But it's easy to add a few lines to one's carry-propagation
routine to calculate E, as sketched below. The other nice property of the
average error is that (unlike fracmax), for a Mersenne number M(p), E levels
off after only about log2(p) iterations and varies very little after that.
In statspeak, measuring E corresponds to sampling the fat part of the normal
distribution, whereas measuring fracmax corresponds to sampling the tails,
which is inherently less reliable for purposes such as fixing FFT-length
exponent breakpoints. (Although one should still calculate fracmax every
iteration during an actual run, to be safe.)
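
The "few lines" might look something like this (a sketch; the variable
names are illustrative, not from any particular implementation):

   #include <math.h>

   /* Round the n raw convolution outputs x[] to the nearest integer,
      accumulating the average (E) and maximum (fracmax) fractional
      errors along the way; the actual carry step would follow the
      rounding.  Returns E. */
   double round_and_measure(double *x, long n, double *fracmax)
   {
       double errsum = 0.0;
       *fracmax = 0.0;
       for (long i = 0; i < n; i++) {
           double r    = rint(x[i]);       /* nearest integer        */
           double frac = fabs(x[i] - r);   /* frac = |x - nint(x)|   */
           errsum += frac;
           if (frac > *fracmax) *fracmax = frac;
           x[i] = r;                       /* carry step goes here   */
       }
       return errsum / n;                  /* E, the average error   */
   }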

The one caveat of using a normal-distribution model is that the asymptotic
estimate of the probability of a fatal error used in the aforementioned
paper requires E << 1; if E is, say, greater than around 0.05, one might
need to include more terms in the asymptotic series for the complementary
error function. Moreover, since rounding errors can take only discrete
values (and the number of these discrete values becomes very small for
fractional errors near 0.5), the tails of such distributions may no longer
be well approximated by a Gaussian. But it's still better than mere
handwaving.
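
For reference, the series in question is the standard large-x asymptotic
expansion

   erfc(x) ~ exp(-x^2)/(x*sqrt(pi)) * [1 - 1/(2x^2) + 3/(4x^4) - 15/(8x^6) + ...],

whose leading term suffices when E << 1; the correction terms start to
matter as the argument (x = 1/(2*E*sqrt(pi)) in the sketch above) shrinks.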

Cheers,
-Ernst
