At 12:29 AM 1999/07/27 EDT, [EMAIL PROTECTED] wrote:
>Hello, again.  As before, I'm replying to a bunch of different people. And I 
>have trouble thinking up decent subject lines as well. :-]
...
><<Well, the likelihood that a failure occurs may be 1%, but the likelihood
>that a double check will not catch it is much lower.  This is due to the
>fact that (barring bugs), the likelihood that the numbers produce the same
>64-bit residue is very, very low.  Probably somewhere between 2^-64 and 
>2^-128.>>
>
>Ah, a product of my own brain drain. You're right, I stand corrected. So, 
>does the rate of needed triple-checks conform with what a 1% error rate per 
>test would indicate? (i.e. 1% * 1%, I think.)

Something close to that, although the error rate per check is believed to
increase with p.  Out of forty thousand LL tests and double checks, one
would expect a few cases where both tests of the same p were wrong, and
that is what we've found: some months back, questions were raised about a
short list of exponents which had had 3 tests done with no matching
residues, requiring (at least) a 4th test.

George Woltman's estimates of about 1 in 100 were posted March 5, 1999, and
distinguished the V17 error rate from those of earlier versions.
I wonder if the V17 statistics were affected by the shift count bug
discovered later.

Prior to this there was a thread (October 13-15, 1998) about cases where
triple-checking fails to produce matching residues.  (Two residues wrong
by random error would produce separate residues, neither matching a
hopefully correct third residue, necessitating a _fourth_ test to obtain
a match.)  At that time, for exponents below 2,000,000, there were about
40,000 double-checked exponents, which appeared to fit a 1-in-100
independent error rate.
A plausible distribution for that would be:
  about 99%  (39600): first and second tests both right
  about .99%   (396): a third test produced a match
  about .01% (a few): triple-testing produced no match
Of those last few, likely all would be matched after a fourth LL test;
one might need a fifth or higher test, but at low odds (~4%).
All those figures are sensitive to the actual error rate, of which 1%
is only an approximation.
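As a sanity check on those proportions, here is a short Python sketch (a toy
model for illustration, not GIMPS code) of double-check outcomes under an
independent per-test error rate e.  Note that reading the 1 in 100 as a
per-test error rate predicts roughly a 2% mismatch rate; the .99% figure
corresponds instead to reading it as the observed mismatch rate.

```python
# Toy model of LL-test double-checking (hypothetical, for illustration only).
# Assumes each test is independently wrong with probability e, and that two
# wrong results never share the same 64-bit residue (that chance is ~2^-64).

def check_distribution(n_exponents, e):
    """Expected counts of double-check outcomes for n_exponents exponents."""
    p_match = (1 - e) ** 2            # first and second tests both right
    p_one_wrong = 2 * e * (1 - e)     # exactly one wrong -> third test decides
    p_both_wrong = e * e              # both wrong -> third test needed too
    p_mismatch = p_one_wrong + p_both_wrong
    # A fourth test is needed when the third matches neither earlier residue:
    # either exactly one of the first two was wrong and the third is also
    # wrong, or both of the first two were wrong (a correct third then
    # matches nothing).
    p_fourth = p_one_wrong * e + p_both_wrong
    return {
        "double-check match": n_exponents * p_match,
        "third test needed": n_exponents * p_mismatch,
        "fourth test needed": n_exponents * p_fourth,
    }

for outcome, count in check_distribution(40000, 0.01).items():
    print(f"{outcome}: {count:.1f}")
```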

><<1) What is the approximate P-90 computing time to test for primality
>for a 1, 10, 100 million (& 1 trillion!) digit Mersenne Primes?>>
0.589 seconds/iteration for a 1-million decimal digit Mersenne prime,
or about 23 days, assuming the use of prime95.  For the other sizes
of exponents, no released version of prime95 supports running such high
exponents.  Runtimes can be estimated roughly from
  p ~= #digits / log10(2)
  t/t0 = p^2 log(p) / (p0^2 log(p0))

Then:

digits   P90 years   P90 sec/iter   notes
10^6        .062        .589
10^7       7.23        6.87         hardware failure before completing 1 test is a risk
10^8     826.         78.5          someone will surely poach the exponent before completion
10^9   93000.        883.5          (the EFF's big money appears to be safe!)
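Those figures can be reproduced (approximately) from the scaling rule above.
A small Python sketch, taking only the 0.589 s/iteration P90 baseline from
the text and scaling everything else by p^2 log(p):

```python
import math

# Estimate LL-test runtimes by scaling from a known baseline.
# Baseline (from the text): a 1-million-digit Mersenne prime takes about
# 0.589 s/iteration on a P90.  Total time scales as p^2 log(p) because an
# LL test runs ~p iterations, each costing ~p log(p) via FFT multiplication.

LOG10_2 = math.log10(2)
BASE_DIGITS = 10**6
BASE_SEC_PER_ITER = 0.589

def estimate(digits):
    """Return (sec/iteration, total P90 years) for a test of this size."""
    p = digits / LOG10_2                  # exponent giving this many digits
    p0 = BASE_DIGITS / LOG10_2
    per_iter = BASE_SEC_PER_ITER * (p / p0) * (math.log(p) / math.log(p0))
    total_years = per_iter * p / (3600 * 24 * 365.25)
    return per_iter, total_years

for digits in (10**6, 10**7, 10**8, 10**9):
    per_iter, years = estimate(digits)
    print(f"10^{len(str(digits)) - 1} digits: "
          f"{per_iter:8.1f} s/iter, {years:10.3f} P90 years")
```

The small differences from the table are expected; the formula is only an
order-of-magnitude estimate.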


Ken
[EMAIL PROTECTED]
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
