----- Original Message -----
From: <[EMAIL PROTECTED]>
To: "Daran" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Monday, December 03, 2001 11:22 PM
Subject: Re: Mersenne: Re: Factoring benefit/cost ratio

> On 3 Dec 2001, at 20:45, Daran wrote:
>
> > Shouldn't that be 1.015 for double-checking assignments?
>
> I think 1.03...

Yes, of course.

> ...However you do have a point. P-1 limits do depend on
> the trial factoring depth,...

Now I'm puzzled.  Obviously both the P-1 and trial factoring limits depend
upon the exponent, so they will correlate, but I don't see a direct dependency
between them.

> and are much smaller for DC assignments
> than for first tests, so there is already something "built in".

Right.  I'd noticed that my P-1 probabilities for DC assignments were about
half those for LLs, but I'd attributed that to the differences between the
magnitudes of the exponents.  This makes more sense.

> > Also does the cost part of the calculation recognise the increased cost
> > of trial-factorisation after 2^64?
>
> Yes. The trial factoring depth is constant at 64 bits from ~8.5
> million to ~13 million. Don't forget that the number of candidates
> which need to be checked is _inversely_ proportional to the
> exponent.

Because any factor must be of form 2kp+1.
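A quick sketch (in Python, with illustrative exponents only) of why the candidate count falls off with the exponent: since every factor q of M(p) = 2^p - 1 must have the form q = 2kp + 1, the number of candidates below a fixed bit depth is roughly 2^b / 2p.

```python
def tf_candidates(p, bit_depth):
    """Yield trial-factoring candidates q = 2*k*p + 1 below 2**bit_depth.

    A real implementation would also sieve on q mod 8 and small primes;
    this sketch only illustrates the 2kp+1 form and the candidate count.
    """
    k = 1
    q = 2 * p + 1
    while q < 2 ** bit_depth:
        yield q
        k += 1
        q = 2 * k * p + 1

# Doubling the exponent roughly halves the number of candidates to a
# given depth (exponents here are arbitrary, chosen for illustration).
small = sum(1 for _ in tf_candidates(656677, 40))
large = sum(1 for _ in tf_candidates(1313353, 40))
```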

[...]

> > Finally if P-1 factorisation were to be spun off into a separate work
> > unit, then the optimal arrangement would be to trial factor while
> > cost_of_trial_factoring * chance_of_P-1_factoring is less than
> > cost_of_P-1_factoring * chance_of_trial_factoring.  Then P-1 factorise.
> > Then complete trial factorisation according to the above formula.
>
> Interesting - but I think the effect would be small.

Perhaps, but the effort to implement (without spinning off P-1 into a
separate work type) would also be small.  The existing formula
(cost_of_trial_factoring < chance_of_trial_factoring * cost_of_LL_test *
2.03) ignores the effect of P-1.  A better formula would be:-
cost_of_trial_factoring < chance_of_trial_factoring * (cost_of_P-1_factoring
+ (1 - chance_of_P-1_factoring) * cost_of_LL_test * 2.03).

This change on its own would surely be trivial to implement.
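The two break-even rules can be written side by side; all the numeric inputs below are made-up placeholders, not actual GIMPS server values.

```python
def tf_worthwhile_old(cost_tf, chance_tf, cost_ll):
    # Existing rule: keep trial factoring while its cost is below the
    # expected saving on the LL test plus double-check (factor 2.03).
    return cost_tf < chance_tf * cost_ll * 2.03

def tf_worthwhile_new(cost_tf, chance_tf, cost_pm1, chance_pm1, cost_ll):
    # Proposed rule: a TF factor also saves the P-1 attempt, and the
    # LL tests are only reached when P-1 itself finds no factor.
    return cost_tf < chance_tf * (cost_pm1 + (1 - chance_pm1) * cost_ll * 2.03)

# Illustrative inputs (seconds and probabilities, invented for the sketch):
old = tf_worthwhile_old(9000, 0.015, 600000)
new = tf_worthwhile_new(9000, 0.015, 8800, 0.026, 600000)
```

Because P-1 is cheap relative to an LL test, the two rules rarely disagree; the correction only matters right at the margin.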

But then, if the P-1 fails, you might as well carry on and do a little more
TF, which brings us back to the simpler formula I gave earlier.  You might
also want to lower your P-1 bounds a shade to take into account the fact
that a successful factorisation may only save a little extra TF effort.

The end result is a *tiny* reduction in the probability of finding a factor
(because of the slight reduction in P-1 bounds).  But you'll do less work,
and if you do find one, you'll probably find it earlier.  How much earlier?
I've just completed P-1 on M656677, which took 8787 seconds, about 1.5% of
the DC estimated to take 6 days 23 hours 43 minutes (603780 seconds),
compared with a 2.6% chance of success.  My guess is that this would put it
optimally just before the last round of TF, which is (or should be) close to
the break-even point.
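The figures above can be checked with a couple of lines of arithmetic:

```python
# 6 days 23 hours 43 minutes, as quoted for the M656677 double-check:
dc_seconds = 6 * 86400 + 23 * 3600 + 43 * 60   # = 603780

# The 8787-second P-1 run as a fraction of the DC: about 1.46%,
# against the quoted 2.6% chance of finding a factor.
pm1_fraction = 8787 / dc_seconds
```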

I'd need an estimate for the time of the last round of TF to be any more
precise.  The calculation would also need to be repeated for a first-time LL
assignment.

My final remark on the subject of early P-1 is the observation that TF and
P-1 search overlapping factor spaces.  Are there any gains to be had in TF
from sieving out some of the smooth k's that an early P-1 failed to find?
Could you structure the factor candidate generation code so that the
unsieved list doesn't contain any smooth k's in the first place?  Even if
it's not possible or cost-effective to do either of these, the fact that
there are smooth k's among the candidates should lower the estimate for the
probability of success.
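A minimal sketch of the sieving idea, assuming a stage-1-only P-1 run with bound B1 and ignoring stage 2 and prime-power subtleties: a factor q = 2kp + 1 would already have been found by that run if k is B1-smooth (the factors 2 and p of q - 1 are included automatically), so those k's can be skipped during TF.

```python
def is_smooth(n, bound):
    """Return True if every prime factor of n is <= bound (trial division sketch)."""
    d = 2
    while d <= bound and n > 1:
        while n % d == 0:
            n //= d
        d += 1
    return n == 1

def sieved_candidates(p, bit_depth, b1):
    """Yield TF candidates q = 2*k*p + 1 below 2**bit_depth, skipping
    the k's a failed stage-1 P-1 run with bound b1 would have covered."""
    k = 1
    while 2 * k * p + 1 < 2 ** bit_depth:
        if not is_smooth(k, b1):
            yield 2 * k * p + 1
        k += 1
```

Whether the smoothness test is cheaper than simply trying the candidate is exactly the cost question raised above; this sketch only shows which candidates overlap.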

Another reason to spin off P-1 into a separate work type is to allow
machines with heaps of memory to concentrate on this kind of work.  A little
experimentation shows that the probability of a successful P-1 with 400MB is
roughly double that of one with a minimal configuration, and some machines
will be doing LLs/DCs without the ability to do P-1 at all.  What I envisage
is that a virgin exponent would first go out to P-1 (which would also do the
first few rounds of TF, according to my earlier formula), then as a
factoring assignment (to completion), then finally to LL and DC.

> What about factoring to a "fractional" depth? With a roughly
> logarithmic distribution of factors, surely about half the factors
> between 2^n and 2^(n+1) would be smaller than 2^(n+0.5), whilst
> searching to 2^(n+0.5) would take only about 41% of the time
> taken to search the whole interval.

This had occurred to me.  One could calculate the exact break-even point and
TF thus far and no further.  However the gains would be minimal for the many
exponents in the middle of the depth 'bands', which are already being
near-optimally factored.  Even those at the edges of the depth bands can't
be very far from the break-even point.  Also the server would need to record
more information to enable TF to be taken further at a later date.
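For reference, the figures in the quoted suggestion check out: candidate count (and hence TF cost) is proportional to the length of the search interval, while a log-uniform factor distribution splits the factors evenly per bit.

```python
import math

# Cost of searching 2^n .. 2^(n+0.5) relative to the full 2^n .. 2^(n+1)
# interval: proportional to interval length, so (sqrt(2) - 1) / 1 ~ 41%.
frac_time = (math.sqrt(2) - 1) / (2 - 1)

# With factors roughly uniform in log2, the half-bit 2^n .. 2^(n+0.5)
# holds about half of the interval's factors: 0.5 bits out of 1.
frac_factors = 0.5
```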

What about automating the elliptic curve factoring effort to make it more
systematic?

> Regards
> Brian Beesley

Daran G.


_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
