> > The probability at the top of the range is presumably such that the cost
> > in time of an erroneous LL test balances the extra time needed for the
> > next FFT size, for which the chance of error is small.
> 
> I think (and hope) the default setup is a bit more conservative than that. 

I guess it is, too.
The "break even" approach suggests tolerating error rates of up to ~30%.
This would be an incentive to double-check more promptly.
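
For concreteness, here is a back-of-the-envelope sketch of that break-even
arithmetic. The model and the 30% step size are my assumptions, not official
GIMPS figures: suppose the next FFT length makes each test a fraction F
slower, and an erroneous result simply wastes the whole test, so the
expected relative cost at the risky size is 1 + p versus 1 + F at the
safe size.

    # Break-even error rate vs. moving up to the next FFT length.
    # Assumptions (illustrative, not GIMPS policy): the larger FFT is a
    # fraction F slower per test, and an erroneous result wastes one
    # full test, so expected relative costs are (1 + p) vs. (1 + F).
    F = 0.30  # assumed slowdown of the next FFT length
    for p in (0.05, 0.15, 0.30, 0.45):
        risky, safe = 1.0 + p, 1.0 + F
        print(f"p={p:.2f}: {'smaller FFT wins' if risky < safe else 'larger FFT wins'}")

On this crude model the two strategies break even exactly at p = F, which
is where a ~30% tolerance would come from if the FFT step costs ~30%.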

> There's the unfortunate possibility of someone running a first LL test on an 
> exponent and incorrectly finding it composite due to an avoidable error - we 
> want this to be "reasonably improbable" even though double-checking would 
> eventually find the mistake!

I'm not sure how "unfortunate" such an occurrence would be
(a 1.8% chance, I believe).
The downside is the long, straggly tail of exponents which have not been
double-checked.

Colquitt and Welsh's out-of-sequence discovery of M110503 in 1988 gave me a
nice target:
I proved it was the 29th Mersenne prime using high-school multiplication on
a K6 in about 6 months. Quite a coup for Moore's Law, I think!
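
The Lucas-Lehmer recurrence itself is tiny; here is a minimal Python sketch
of the standard algorithm (Python's built-in big integers stand in for
literal schoolbook multiplication):

    def lucas_lehmer(p):
        """Return True iff 2**p - 1 is prime, for odd prime p."""
        m = (1 << p) - 1          # the Mersenne number M_p
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m   # one LL squaring step
        return s == 0

    # Sanity check on small exponents: M_p is prime for p = 3, 5, 7, 13.
    print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])

Each of the p - 2 iterations is dominated by one big squaring mod 2^p - 1,
which is why the choice of multiplication algorithm (and FFT size, per the
discussion above) dominates the running time.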

I think the discovery of another prime with less than 10,000,000 digits
would be more fun than the award of the $100,000.

David Eddy


> 
> Regards
> Brian Beesley
