>       This idea is rather obvious, but I don't remember anybody
> mentioning it.

This idea is rather obvious, and no, I don't remember seeing it either.

>       The scheme of logging only (exponent, residue) was fine when a
> double check lasted a few days. But now, a lot of time could be saved
> on triple checks if the partial residue at every X iterations were
> logged. X could be a million or so. To avoid bloating hard disk and
> network traffic, take only 16 bits. The final residue would be
> reported as always, of course.
>
>       When a discrepancy is found, the double check stops (and
> starts on another exponent) while another machine (with different
> software?) runs the triple check up to the offending point, where we
> know (hopefully) who made the mistake. If it was the first check that
> erred, the double-check machine continues the LL test where it
> stopped, using the intermediate files. If the double-check machine is
> wrong, the triple check now turns into the double check.
>
>       Furthermore, it's reasonable to think that most errors in
> LL tests occur early, due to buggy hardware. So, the average time
> saving would be more than 50% for faulty results.

I think the idea has definite merit.  If an error does occur, it's
statistically equally likely to happen at any step along the way.  Errors
are every bit as likely to happen on the very first iteration as they are
at the 50% mark, or the 32.6% mark, or on the very last iteration.

Especially as the exponents get larger and larger, I see a *definite*
possibility to reduce double check times by having first time LL tests
report residues at certain "percentages" along the way.

Just for example, every 10% of the way, the client would send its current
residue to the Primenet server.

During the double check of this exponent, its residues along the way are
compared to the residues from the first-time check (which could presumably
be "checked out" along with the exponent itself).
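The checkpoint comparison described above could be sketched roughly like
this.  This is only an illustration, not how Prime95 or Primenet actually
work; the function name and sample residue lists are hypothetical, and the
16-bit truncation follows Oscar's proposal:

```python
def first_mismatch(checkpoints_a, checkpoints_b):
    """Compare two lists of partial residues taken at the same
    iteration counts.  Return the index of the first checkpoint
    where they disagree, or None if all checkpoints match."""
    for i, (a, b) in enumerate(zip(checkpoints_a, checkpoints_b)):
        # Per the proposal, only the low 16 bits of each residue are
        # logged, to keep disk and network traffic small.
        if (a & 0xFFFF) != (b & 0xFFFF):
            return i
    return None

# Hypothetical checkpoint logs from a first test and a double check:
first_test   = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
double_check = [0x1A2B, 0x3C4D, 0xDEAD, 0x7081]
print(first_mismatch(first_test, double_check))  # -> 2
```

With 16-bit checkpoints, two healthy runs match at every point, while a
hardware error shows up at the first checkpoint after the faulty iteration.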

What happens when there's a mismatch?  Well, first of all, you've saved
yourself, I suppose, 50% on average of the time needed to run a full test.
Sometimes you'll notice a mismatch in the first 10% of the iterations,
saving a lot of time; sometimes you might not notice until the very last
iteration, but you get the idea.

Now, the question is, what do you do when there is a mismatch?  I'd guess
that the current double check should be put on hold, reported back to
Primenet, and reassigned to a different person for a "triple check" (a
different person ensures a different machine runs the 3rd test...maybe not
necessary, but a darn good idea IMO).  It will check up to the point of
mismatch, see which one (the 1st or the 2nd) it agrees with (perhaps
neither...quadruple-check time), then continue on.

If the 3rd check agrees with the 1st, then the 3rd machine should finish up
and make sure there were no more errors in the 1st check.  However, if the
3rd check agrees with the 2nd check instead, then both the 2nd and 3rd
checkers should finish all iterations and check for total agreement.
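The resolution rule in the last two paragraphs can be sketched as a small
decision function.  Again, this is only a guess at the logic being
described, with hypothetical names; the arguments are the partial residues
of the three tests at the iteration where the first two disagreed:

```python
def resolve(third, first, second):
    """Decide the next step once the triple check reaches the
    iteration where the first two tests disagreed.  Each argument
    is that test's partial residue at the mismatch point."""
    if third == first:
        # The 2nd test erred: the 3rd machine finishes the run and
        # verifies the remaining checkpoints of the 1st test.
        return "third finishes; verify remainder of first test"
    if third == second:
        # The 1st test erred: both the 2nd and 3rd machines finish
        # all iterations and check for total agreement.
        return "second and third both finish and compare"
    # Agrees with neither: a 4th test is needed.
    return "quadruple check"
```

The point is that the disagreement is pinned down at a single checkpoint,
so the tiebreaker only has to run up to that iteration before deciding.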

Maybe I'm overcomplicating things (definitely), but that's a rough guess as
to how it might work.

Obviously, this needs more thought, and Primenet would need to handle this
sort of thing, along with the clients.

But just think how much time could be saved when the first and second tests
disagree.  If all goes well, the 1st and 2nd tests will match, and it takes
no longer than it does now.  But in those rare cases where a 3rd test is
needed, you've saved a LOT of time by finding the problem early on.

Like I said, as the exponents get larger, the payoff for doing this, in
terms of CPU time, will MORE than make up for the hassles of reorganizing
how Primenet and the clients work.

Agree?  Disagree?  Comments would be nice.

I like the idea, personally.  Good thinking Oscar.

Aaron

_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
