Based on previous messages in this thread, I'm going
to throw in my 2 cents...  (and avoid a lot of
quoting!)

While the reporting of partial residues at specific
points, say every 10% (or 5% over 20M), isn't a bad
idea, it won't necessarily save a whole lot of time.

The only thing this scheme would accomplish is to
allow a third test to begin (if a mismatch is found
somewhere along the way) before the second one
finishes.  If the third test did match the first,
there would be savings in discarding the second.
What if, however, the third test matched the second?
Both tests would have to complete, and some safeguard
would have to be in place should the third test
complete before the second.

This would complicate the logistics considerably (though
not make it impossible).  The client would be required
to report most of the information to the server, while
receiving little from the server (a stop, go, or discard
maybe).  A checksum figure as part of the work file, as
somebody suggested (shown below), wouldn't work well if
the latter case above occurred (2nd and 3rd tests match).

>Double-Check=
>M23780981,64,863FF87,678676AA,FF637BC,[...],CRC:9923FDA.
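To make the bookkeeping concrete, here is a rough sketch of the
decisions involved when partial residues are compared at checkpoints.
All names and the stop/go/discard responses are my own invention for
illustration, not anything the server actually implements:

```python
def checkpoint_action(first_residues, second_residues, k):
    """Compare the k-th partial residue of the completed first test
    against the running double-check; decide what the server says."""
    if second_residues[k] == first_residues[k]:
        return "go"            # still agreeing -- keep going
    return "start-third"       # mismatch -- a third test can begin early

def third_test_verdict(first_res, second_res, third_res):
    """Once a third test finishes, decide which earlier test to keep."""
    if third_res == first_res:
        return "discard-second"   # the only case with real savings
    if third_res == second_res:
        return "complete-both"    # the safeguard case described above
    return "all-disagree"         # would force yet another run
```

Note that only the "discard-second" outcome saves any time; the other
two branches are exactly the logistics problem described above.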

An even bigger problem than the logistics, however,
would be the confusion factor.  This would likely increase
exponentially, especially when attempting to determine
the current overall status of the project.  We might
even have to have George committed eventually, when
he starts running his fingers up and down across his
lips or rubs the side of his head to the bone with his
fingers or palm of his hand... :-(

I think that as the time for each LL test increases,
there would be far greater time savings in doing
deeper trial-factoring.

Let's look at some figures using the following:
 1) We assume a Pentium II at 233MHz.
 2) An LL test takes a year (around exponent 19.3M).
    (AND HEY! This is 4 to 5 years away!!)
 3) Current trial-factoring to 2^64 takes 2 days if
    no factor is found.
 4) Currently, only about 13% of exponents are actually
    factored, with the rest requiring an LL test.
 5) 2 LL tests are done for each exponent (an
    original and a double-check).

Current trial-factoring saves 12.5% of the total time
that would be spent without any trial-factoring.
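Under the assumptions listed above (and taking 1 year as 365 days),
that figure can be checked in a few lines of Python -- a
back-of-the-envelope sketch of my own, not anything from the GIMPS
code:

```python
LL_DAYS = 365.0        # one LL test takes a year (assumption 2)
TESTS = 2              # original + double-check (assumption 5)
FACTOR_DAYS = 2.0      # trial-factoring to 2^64 (assumption 3)
FACTORED = 0.13        # fraction of exponents factored (assumption 4)

no_factoring = TESTS * LL_DAYS                       # 730 days/exponent
with_factoring = FACTOR_DAYS + (1 - FACTORED) * no_factoring
saved = 1 - with_factoring / no_factoring
print(f"{saved:.1%}")  # about 12.7% -- near the 12.5% quoted above
```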

If we were to trial-factor to, say, 2^70 (which should
increase factoring time to 1 month), without any
improvement in the fraction of exponents factored, the
overall time saved would drop to about 8.8%.

However, for each percentage point of improvement in
the fraction of exponents factored, the overall time
saved would increase by about one percentage point.  As
a result, if we were to improve to 20% of the exponents
being factored, we would save close to 16% of the
overall time that would be spent without any factoring.
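The same back-of-the-envelope model (again my own sketch, with a
365-day year, so the results land a shade above the figures quoted)
reproduces both numbers:

```python
LL_DAYS, TESTS = 365.0, 2
no_factoring = TESTS * LL_DAYS      # 730 days per unfactored exponent

def saved(factor_days, factored):
    """Fraction of total time saved versus doing no factoring at all."""
    cost = factor_days + (1 - factored) * no_factoring
    return 1 - cost / no_factoring

print(f"{saved(30.0, 0.13):.1%}")   # 2^70, 13% factored: about 8.9%
print(f"{saved(30.0, 0.20):.1%}")   # 2^70, 20% factored: about 15.9%
```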

Of course, factoring would save even more time in the
case of an exponent that would have been factored at
2^70 but, having only been trial-factored to 2^64,
ended up needing a triple-check during LL testing.

In conclusion, even with a new algorithm that halves
the time for an LL test, 1 month to factor an exponent
still beats a year to perform an LL test and its
corresponding double-check. In addition, it should be
easier to speed up the factoring process than the LL
testing process.


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
