> This scheme makes almost no sense for normal double checking. This is
> because it would save *no* time at all. Think about it: even if you
> identify that an error occurred in the second week of a 3-month test,
> you still have to run it to completion, and a third test must also be
> run. (So 3 LL tests must still be run if an error occurs.)
It won't make "normal" double checks run any faster, true. What it will do
is save a good amount of time in those instances when the double check
doesn't match the first-time check.
For example, a first-time LL test is run, spitting out (partial) residues
every 10% along the way.
A double check is running, and at 30% along it notices that its partial
residue does NOT match the one from the first check. We now know *MUCH*
sooner that something is awry than if we only checked at the end. How does
this save us time? Well, it depends.
Once the mismatch is noticed, the exponent is assigned to someone else for a
double-check, while the double-check currently in progress is put on hold.
Once the "triple check" gets to the same 30% mark, it can compare its
partial residue to those of the 1st and 2nd checks.
a) The 3rd check matches the first check. The one running the double-check
on hold is cancelled and the one doing the 3rd check will finish up.
-or
b) The 3rd check matches the second check. Both the double and triple
checks run to completion and the final residues are compared.
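The resolution logic above can be sketched in Python. This is only a rough
illustration; the function name and data layout are my own invention, not
anything from an actual GIMPS client:

```python
def resolve_mismatch(first_partials, second_partials, third_partials, mark):
    """Decide what to do once the triple check reaches the checkpoint
    ('mark', e.g. 30 for "30% done") where the double check first
    disagreed with the first test.

    Each *_partials argument maps a checkpoint to the partial residue
    recorded there.
    """
    if third_partials[mark] == first_partials[mark]:
        # Case (a): triple check agrees with the first test.
        # Cancel the on-hold double check; the triple check finishes alone.
        return "cancel double check; triple check runs to completion"
    elif third_partials[mark] == second_partials[mark]:
        # Case (b): triple check agrees with the double check.
        # Both run to completion and the final residues are compared.
        return "resume double check; compare final residues of 2nd and 3rd"
    else:
        # All three disagree at this checkpoint: a fourth check is needed.
        return "all three differ; assign a fourth check"
```

The point of the sketch is that the decision can be made at the 30% mark,
long before any of the tests has run to completion.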
Hopefully, if the 2nd and 3rd checks run to completion, they'll agree with
each other at the end as well. I suppose it is even possible that if a 2nd
and 3rd check run simultaneously, there's the chance that one or the other
will have an error some time *after* that point where they did happen to
agree. Oh well...time for a 4th check, but those should be pretty rare.
If I recall correctly, there have even been rare occasions when 4 checks
were necessary? Maybe not? I know that some exponents have taken 3 tests
to resolve.
We ONLY really save any time when:
1) The first check is correct, and
2) The second check is incorrect, and
3) The second check noticed a discrepancy somewhere before the last
iteration
This doesn't happen often, but as has been pointed out, the number of errors
is likely to increase as exponents get larger. And runtimes per exponent
also get much longer. The time saved could be months of CPU time.
At worst, it's no *slower* than the way we do it now, and at best, it
saves a lot of time when triple checks become necessary.
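As a very rough illustration of what such an estimate might look like, here
is a back-of-envelope calculation in Python. Every number in it is an
assumed placeholder for illustration, not a measured value:

```python
# Back-of-envelope estimate of expected CPU time saved per double check.
# All figures below are illustrative assumptions, not measured values.

test_months = 3.0   # assumed runtime of one LL test on a large exponent
error_rate = 0.02   # assumed probability that a given test has an error

# With end-only checking, a mismatch is only discovered after the double
# check finishes, so the triple check starts a full test_months late.
# With partial residues, a mismatch is caught at the first checkpoint
# after the error occurs -- on average about halfway through -- so the
# triple check can start roughly test_months / 2 sooner.
expected_savings = error_rate * (test_months / 2)
print(f"Expected saving per double check: {expected_savings:.3f} months")
```

With these made-up numbers the per-exponent saving looks small, but summed
over thousands of assignments (and with longer runtimes and higher error
rates on larger exponents) it adds up.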
I think it's worth it, but that's my opinion. Some more formal estimates of
how long exponents take to calculate, the probable error rate, etc. should
be done to really say how much time could be saved, but in the final
analysis, I think the work involved to set this up would be well worth it,
especially when we move into exponents larger than 20M, or for primes going
into the deca-megadigit range (2^33M-1 and beyond).
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers