On 17 Aug 99, at 15:36, Lucas Wiman wrote:
> 
> Isn't this the "optimal" configuration if all computers in GIMPS
> were identical?
> 
> There are a number of 486's that are doing only factoring, does
> it take these into account?  Think about it, there are a number of
> computers which should be factoring numbers faster than LL
> tests can be performed.  Call this "factoring profit."  Wouldn't
> it make sense to keep factoring profit as low as possible, as this
> could speed up the more immediate concern of sooner-to-be-performed
> LL tests?

I questioned George about varying the factoring depth for different 
processor types in v19; his reply was that he was treating all 
processors as if they were P6 family. That means that P5 will spend a 
higher percentage of time factoring than is optimal; it also means 
that factoring assignments will take longer, thus reducing the lead 
of factoring over testing.

I don't think you should be too worried about 486s. There aren't that 
many left in the project, and they take about a month to run the 
current factoring assignments (compared with about 4 days on a P100).

What may well be a problem in the near future is the fast systems 
still running v17. Since double-check assignments for exponents <2^22 
are rapidly running out, they will have to be given factoring instead 
8-(
> 
> As an example, I just received the factoring assignment of 10258511,
> but I should think that we wouldn't even start these tests for some time.
> Why not go back and factor some 8M exponents further so as to save a
> few current LL tests?  Wouldn't this actually get us to the next prime
> quicker?
> 
> Or do I need to get more sleep :)?

Maybe you need more sleep. Pick a single exponent in the 8M range; 
there's a good chance that you could spend hundreds of CPU years 
trying to factor the corresponding Mersenne number by _any_ known 
method and get nowhere, whereas an LL test - somewhere around half 
a P90 CPU year - will yield a solid result.

What we want to minimise for any particular exponent, on an 
"average" system (the P6 model is probably close enough), is 
Tf + (1-p)Tl, where Tf is the estimated factoring time (bearing in 
mind that factoring stops as soon as a factor is found), p is the 
probability that factoring will succeed and Tl is the estimated 
time to run an LL test. Clearly we will find the next Mersenne 
prime faster if we complete testing more exponents; and we will 
maximise the number of completely tested exponents, with whatever 
resources we have available, if we minimise the expected effort 
needed to completely test each exponent.
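To make the cost model concrete, here's a rough Python sketch of 
choosing the trial-factoring depth that minimises Tf + (1-p)Tl. The 
timing constants and the depth range are made-up placeholders, not 
Prime95's actual figures, and the 1/k per-bit factor probability is 
a standard heuristic rather than an exact result:

```python
import math

# Sketch of the cost model above: choose the trial-factoring depth d
# (in bits) that minimises expected total effort per exponent,
#     cost(d) = Tf(d) + (1 - p(d)) * Tl
# The timing constants below are illustrative placeholders, NOT
# Prime95's actual figures.

def prob_factor_below(d, start=32):
    # Heuristic: the chance of a factor between 2^(k-1) and 2^k is
    # roughly 1/k, so accumulate 1/k from `start` bits up to d bits.
    return min(1.0, sum(1.0 / k for k in range(start, d + 1)))

def factoring_time(d, p, c=1e-11):
    # Trial factoring to depth 2^d is O(2^d / p); c is a made-up
    # machine-speed constant.
    return c * 2.0 ** d / p

def ll_time(p, c=5e-13):
    # An LL test is O(p^2 log p); c is again a placeholder.
    return c * p * p * math.log(p)

def best_depth(p, depths=range(55, 72)):
    tl = ll_time(p)
    return min(depths, key=lambda d: factoring_time(d, p)
               + (1 - prob_factor_below(d)) * tl)

print(best_depth(10258511))  # the exponent mentioned above
```

With these toy constants the optimum for the exponent above happens 
to come out at 64 bits; the project's real limits come from measured 
timings rather than this kind of back-of-envelope model.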

The idea of factoring only to a particular bit depth is that trial 
factoring to depth 2^d is O(2^d/p) whereas an LL test is 
O(p^2 log p), and the chance of finding a factor in any one extra 
bit is only about 1/d. If you go 1 bit deeper, you double the 
factoring effort but find a lot less than twice as many factors. 
If you go 3 or 4 bits deeper, trial factoring (with an uncertain 
outcome - somewhere around a 1 in 4 chance, at best, of finding a 
factor) actually costs as much CPU time as running an LL test. 
Mind you, the current limits are set from theory and statistical 
data which may not be entirely above suspicion; if you can give a 
sound theoretical argument, or point out unambiguous trends in 
trial factoring success rates, which clearly indicate that it 
would be cost effective to factor deeper, I'm sure George would 
implement the necessary changes.
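The "one bit deeper" trade-off can be quantified with the same 1/d 
per-bit heuristic (an assumption, not an exact result): sieving bit 
d alone costs about as much as all previous bits combined, while the 
chance of a factor turning up in that bit is only about 1/d, so the 
expected cost per factor found grows like d * 2^d:

```python
# Marginal view of "go 1 bit deeper": sieving bit d alone takes work
# proportional to 2^d, while the chance of a factor whose leading bit
# is at position d is roughly 1/d (a heuristic, not an exact result).
# Expected work per factor found in that bit therefore grows like
# d * 2^d, i.e. slightly more than doubling with every extra bit.

def marginal_cost_per_factor(d, work_unit=1.0):
    work = work_unit * 2 ** d   # effort to cover bit d alone
    p_hit = 1.0 / d             # heuristic chance of a factor in bit d
    return work / p_hit

base = marginal_cost_per_factor(62)
for d in (63, 64, 65):
    print(d, marginal_cost_per_factor(d) / base)
```

Each extra bit costs a factor of 2(d+1)/d more per expected factor 
found, which is why trial factoring hits diminishing returns so 
sharply past the break-even depth.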

Meanwhile, those people running factoring assignments (on slow 
machines, or otherwise) are saving those people who wish to 
concentrate on LL testing the 5 to 10 percent of CPU cycles which 
they'd otherwise have to devote to trial factoring 8-)

Sooner or later, we *will* get round to running LL tests on those 10 
million range exponents, so the effort isn't wasted.

I wouldn't wish to encourage users with fast systems to choose 
factoring instead of LL tests - I don't want to see factoring get 
_too_ far ahead of LL tests, or LL tests _too_ far ahead of double-
checks, for that matter. The balance is probably about right at 
the moment (certainly far better than it was this time last year, 
when very few people were running double-checks, because Intel 
systems weren't eligible until v17). If anything, I think we 
should encourage some of the people running factoring on 
reasonable systems to run double-checks instead.

Regards
Brian Beesley
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
