On 5 Aug 99, at 9:05, Eric Hahn wrote:

> Based on previous messages in this thread, I'm going
> to throw in my 2 cents...  (and avoid a lot of
> quoting!)

And I'm doing the same. I'd probably have reacted sooner if I hadn't 
been offline (getting a kidney stone fixed).

Several months ago I suggested a method which does require some work 
on both client and server, but which really does reduce wasted work 
to a minimum. The way it works is as follows:

Assign each exponent initially to two systems A and B. The software 
has a table of checkpoints built in (these can be very sparse for 
smaller exponents, becoming denser as the exponent increases, so as 
to maintain about 1 P90 month between checkpoints).
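To illustrate the "denser checkpoints for bigger exponents" idea, here is a
minimal sketch (the calibration constant and the function name are my own
inventions, and the per-iteration cost model of p * log p is an assumption,
not measured data):

```python
import math

# Hypothetical calibration: iterations a P90 completes in one month
# on a 1,000,000-bit exponent (illustrative figure only).
P90_MONTH_ITERS_AT_1M = 2_500_000

def checkpoint_spacing(p):
    """Iterations between checkpoints so each segment costs ~1 P90 month.

    Assumes per-iteration cost scales as p * log(p) (FFT multiply),
    so larger exponents automatically get denser checkpoints.
    """
    ref = 1_000_000
    cost_ratio = (p * math.log(p)) / (ref * math.log(ref))
    return int(P90_MONTH_ITERS_AT_1M / cost_ratio)

# Larger exponent -> each iteration is dearer -> checkpoints closer together
assert checkpoint_spacing(33_000_000) < checkpoint_spacing(10_000_000)
```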

Suppose A reaches the first checkpoint before B. A stops working on 
the exponent & moves on to some other work already queued. It reports 
the iteration number and the low 64 bits of the residue to the 
PrimeNet server.

When B reaches the first checkpoint, it also stops working on the 
exponent & gets on with something else. It also reports the iteration 
number & the low 64 bits of the residue to the PrimeNet server. The 
PrimeNet server then checks to see if the checkpoint information 
matches that sent in by another system. If it does, it immediately 
sends a message to B instructing B that it can continue with the 
checked exponent up to the next checkpoint. It also tags a message to 
be sent to A instructing A to proceed with the checked exponent, 
which is communicated to A next time A checks into the server.


If B's checkpoint info doesn't match A's, the exponent is issued to 
another system C, which calculates to the first checkpoint & checks 
in, as before. If we now have two matching residues, the two systems 
supplying them are told to continue and the system with the "wrong" 
result is told to abandon that exponent. If all three differ, issue 
the exponent to system D ...
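The server-side bookkeeping this requires is modest. A minimal sketch of the
matching logic (all names hypothetical; a real PrimeNet implementation would
also need timeouts, authentication & persistent storage):

```python
# Sketch of the checkpoint-matching idea described above.
# Maps (exponent, iteration) -> list of (system_id, low 64 bits of residue).
reports = {}

def check_in(exponent, iteration, system_id, residue64):
    """Record a checkpoint report; return {system_id: instruction} once
    a decision is possible, or {} while waiting for the partner system."""
    entries = reports.setdefault((exponent, iteration), [])
    entries.append((system_id, residue64))
    # Count how many systems reported each residue value.
    counts = {}
    for _, r in entries:
        counts[r] = counts.get(r, 0) + 1
    best = max(counts, key=counts.get)
    if counts[best] >= 2:
        # Two matching residues: those systems continue; any odd one out
        # is told to abandon the exponent.
        return {sid: ("continue" if r == best else "abandon")
                for sid, r in entries}
    if len(entries) >= 2:
        # All residues reported so far differ: bring in another system.
        return {"server": "issue exponent to a new system"}
    return {}  # first report in: wait for the partner
```

For example, when A and B report matching residues, both get "continue"; if
they disagree, the server issues the exponent to C and the majority residue
decides who carries on.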

The last iteration (completing the run) is treated in the same way as 
an intermediate checkpoint, except that there is no need for the 
PrimeNet server to send a "continue" or "abort" command - though this 
might provide useful feedback to the user, if it were done anyway.

As well as minimising wasted work, this scheme results in very little 
extra server traffic, since the systems all need to check in 
occasionally anyway. If Prime95 is altered so that it always works on 
the _smallest_ exponent it has "in hand", work should be completed 
reasonably quickly and in reasonable order. (Especially bearing in 
mind that this scheme completes double-checking at the same time as 
first tests!)

When a prime is found, the discovery credit (& any prize money) 
should obviously be split between the two users who ran the last 
iteration on the exponent in question.

> I think as the time increases for each LL test, there
> would be much more time savings in attempting to do
> higher trial-factoring.

Yes, trial factoring to a higher limit does help, but, unfortunately, 
as well as each extra bit of factoring depth costing as much time as 
all the previous work combined, the chance of finding a factor (in a 
"window" of
any particular fixed size) decreases steadily as the factoring depth 
increases. There is also going to be a jump in factoring time 
somewhere around 62/63/64 bits due to hardware limitations of the 
Intel architecture.

Don't forget, factoring time is O(2^d/p) whereas LL testing time is 
"only" O(p^2 * log p), where p is the exponent and d is the factoring 
depth in bits.
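Putting rough numbers on those growth rates (purely illustrative; constants 
are ignored, so only *ratios* between two depths or two exponents mean 
anything):

```python
import math

def factoring_cost(p, d):
    """Relative cost of trial factoring exponent p to depth d bits: O(2^d / p)."""
    return 2.0 ** d / p

def ll_cost(p):
    """Relative cost of an LL test on exponent p: O(p^2 * log p)."""
    return p * p * math.log(p)

p = 10_000_000
# Each extra bit of factoring depth doubles that cost, i.e. costs as
# much as all the previous depths put together.
assert factoring_cost(p, 64) == 2 * factoring_cost(p, 63)
# Doubling the exponent makes an LL test a bit more than 4x as expensive.
assert 4 < ll_cost(2 * p) / ll_cost(p) < 4.2
```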

Finally, although save files for exponents > 33 million are going to 
be a considerable size (14 MBytes for exponents just big enough to 
have 10 million decimal digits), it might be worth considering an 
_option_ where users (with fast network connections) could deposit 
save files for "safe keeping" (as a backup). As has been pointed out 
elsewhere, this would save time in the event that a user goes AWOL. 

Clearly it isn't reasonable to expect users with dial-in connectivity 
to transmit files > 10MBytes long with any frequency, but, for users 
with high-speed connections, it doesn't represent much of a problem.
The other issue is finding enough disk space to store a large number 
of these files ... they don't compress very well, either!

Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers