Richard Otter wrote:
{snip}
> M10068757 Roundoff warning on iteration 9803267 maxerr = 0.437500000000
> M10068757 is not prime. Res64: AEED24F91193EE6F. Program: E2.7z
Your result is in all likelihood fine - I've done double-checks that
produced many more roundoff errors of the 0.4375 variety and still yielded
the correct result. Yes, 0.4375 is quite close to the fatal 0.5 level,
but the reason it's highly unlikely one of those 0.4375's was really
a 0.5625 which was aliased (by the error detection algorithm) to a 0.4375
is as follows:
a) large errors like the above are the result of large convolution
coefficients during the FFT-based squaring. A large significant (non-error)
part of the convolution coefficient means that any accumulated rounding
errors will collect in the least-significant few bits of the floating-point
mantissa. That's why errors close to 0.5 tend to come in the form of
(integer)/(small power of 2).
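To make the aliasing concrete, here is a minimal sketch, assuming the error detector reports the distance from each convolution output to its nearest integer (the names below are illustrative, not Mlucas's actual code). That measure can never exceed 0.5, so a true error of 0.5625 rounds to the wrong integer and shows up as 0.4375:

```python
def reported_roundoff(x):
    """Distance from x to the nearest integer -- the quantity an
    LL-test error detector of this kind reports. It can never exceed
    0.5, so any true error above 0.5 is aliased to a value below 0.5."""
    return abs(x - round(x))

exact = 7.0                    # the true integer convolution coefficient
computed = exact + 0.5625      # a (fatal) true error of 0.5625
print(reported_roundoff(computed))   # reports 0.4375, not 0.5625
```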
b) Especially for large runlengths (and after the first few hundred iterations
or so), rounding errors tend to be randomly distributed in an approximately
Gaussian fashion, and the distribution is essentially identical from one
iteration to the next. That means that if one of those 0.4375's was really
a 0.5625 aliased to a 0.4375, you'd very likely have seen a 0.5000 (which would have
caused the test of that exponent to be halted) at some point. Statistically,
for a run like yours where errors above 0.4 are few, an error of 0.5625 is
going to be rather rarer than a 0.5000, i.e. you'd be quite unlikely to
get an error of 0.5625 (which would evade detection) but not see any 0.5000's
(which would not).
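A back-of-the-envelope check of that statistical claim, under the stated assumption that per-iteration errors are roughly zero-mean Gaussian: the tail probability falls off monotonically, so a 0.5625 error is strictly rarer than a 0.5000 one for any plausible standard deviation (the sigma values below are purely illustrative):

```python
import math

def gaussian_tail(x, sigma):
    """P(|e| >= x) for a zero-mean Gaussian with standard deviation sigma."""
    return math.erfc(x / (sigma * math.sqrt(2.0)))

# Whatever the actual sigma, 0.5625-level errors are rarer than 0.5000-level
# ones, so a run showing no 0.5000's is very unlikely to be hiding a 0.5625.
for sigma in (0.05, 0.10, 0.15):
    assert gaussian_tail(0.5625, sigma) < gaussian_tail(0.5000, sigma)
    print(sigma, gaussian_tail(0.5000, sigma), gaussian_tail(0.5625, sigma))
```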
> I am running Mlucas_2.7z.ev4 on a Compaq Alpha machine.
For you to see any roundoff errors for p ~ 10.07M seems unusual (the other
such cases I've seen are all at 10.08 or above), so I'm guessing that you're
running under DEC Unix 4.0. In 4.0, there's a bug in the real*16 trig
library which Mlucas uses for sincos tables - for angular arguments
of the form theta = n*pi/2 +- pi/512, the sin and cos values can be
incorrect. I first encountered this 1 1/2 years ago and built in a
workaround for it - the code compares the real*16 and real*8 values
and, if the difference exceeds an error threshold, uses the latter.
This costs a small amount of accuracy, but when you're really close
to the default exponent limit that's all it takes.
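The workaround amounts to a cross-check between the two precisions; here is a hypothetical sketch of that logic (Python has no real*16, so quad_sin below merely stands in for the real*16 library result, and the tolerance is illustrative):

```python
import math

def checked_sin(theta, quad_sin, tol=1e-12):
    """Return the quad-precision sine unless it disagrees with the
    double-precision value by more than tol, in which case fall back
    to the (slightly less accurate but trustworthy) double result.
    quad_sin stands in for the real*16 library value, which the buggy
    DEC Unix 4.0 trig library can get wrong near theta = n*pi/2 +- pi/512."""
    dbl_sin = math.sin(theta)
    if abs(quad_sin - dbl_sin) > tol:
        return dbl_sin    # real*16 value looks corrupted: use real*8
    return quad_sin       # normal case: keep the extra accuracy

theta = math.pi / 2 - math.pi / 512
ok  = checked_sin(theta, math.sin(theta))  # agreement: quad value kept
bad = checked_sin(theta, 0.123)            # bogus quad value: rejected
```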
Also, I suggest you upgrade to v2.7a at your earliest convenience;
this will allow the program to continue on to the next exponent in
the worktodo.ini file if the current run quits due to a fatal error
(see below). 2.7a also supports runlengths of 288K and 576K (whereas
2.7z jumps to 320K and 640K, respectively), which means less of a
timing jump when your exponents cross the 5.15M and 10.11M thresholds.
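The size of that timing jump can be estimated with the usual N*log(N) cost model for FFT-based squaring - a rough scaling assumption, not a measurement of Mlucas itself:

```python
import math

def fft_cost(n):
    """Rough per-iteration cost model for an FFT-based squaring: N log N."""
    return n * math.log(n)

# Stepping 256K -> 288K adds noticeably less per-iteration work
# than jumping straight from 256K to 320K.
for new_k in (288, 320):
    ratio = fft_cost(new_k * 1024) / fft_cost(256 * 1024)
    print(f"256K -> {new_k}K: ~{100 * (ratio - 1):.0f}% more work per iteration")
```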
* * * *
WHAT TO DO IF YOU GET A FATAL ERROR MESSAGE:
Several users have reported encountering fatal convolution errors
while testing exponents close to the default limits for FFT lengths 256K
and 512K. If this happens using v2.7z, the program will simply quit.
Using v2.7a, the program will instead halt the test of the current
exponent and move on to the next one in the worktodo.ini file.
In either case, the previous-checkpoint savefiles for the halted
exponent will still be there, and I have made available a higher-accuracy
(and slightly slower) version of the 2.7a code which should enable you to
safely finish these runs. Check out
ftp://209.133.33.182/pub/mayer/README.html
Sorry about any inconvenience this may cause - as more people use the
program, I'll be able to refine the runlength/exponent breakpoints based
on observed error behavior.
Cheers,
-Ernst