Daran wrote:
> I recently kluged together a script to analyse the factors in my results.txt
> file for smoothness, and discovered the following results, among others.
> 
> P-1 found a factor in stage #2, B1=45000, B2=675000.
> UID: daran/1, M7893989 has a factor: 34859249922062613959
> =2*167*9533*1386901*7893989
> 
> P-1 found a factor in stage #2, B1=40000, B2=650000.
> UID: daran/1, M7726057 has a factor: 24561404952170157528115369
> =2^3*3*29*113*863*5861*7726057*7991441
> 
> Neither of which were k-smooth to B2
> 
> A search on "Pollard's Method" came up with this webpage
> http://www.users.globalnet.co.uk/~aads/Pminus1.html which lists several more
> from GIMPS, some of which with unfeasibly large B2 values needed to make the
> factor k-smooth.
> 
> One possible explanation for this is pure luck.  The P-1 method will find
> all k-smooth prime factors, but there is no guarantee that other factors
> won't pop out, just no particular reason to expect them to.  The chance of
> this would seem remote.

It is indeed remote, and it is not the reason these two factors were 
found.

f1 = 34859249922062613959
f2 = 24561404952170157528115369

(f1-1) / 1386901 = 25134634643758, and
3^25134634643758 == 32315646750132086258 != 1 (mod f1)

so 3 is not a 1386901-th power in Z/Zf1, and

(f2-1) / 7991441 = 3073463841148318248, and
3^3073463841148318248 == 8000871351421523004381452 != 1 (mod f2)

so 3 is not a 7991441-th power in Z/Zf2, either.

The P-1 algorithm really has to exponentiate by 1386901 and 7991441, 
respectively, to find these two factors, at least when starting with 3 
as the base for the exponentiation, which Prime95 does.
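For convenience, both checks can be reproduced with Python's built-in 
three-argument pow (just a verification sketch, nothing to do with 
Prime95's own code):

```python
# Verifying the two non-residue checks above with Python's built-in
# modular exponentiation. Factors and exponents are taken from this post.

f1 = 34859249922062613959            # factor of M7893989
f2 = 24561404952170157528115369      # factor of M7726057

e1 = (f1 - 1) // 1386901             # = 25134634643758
e2 = (f2 - 1) // 7991441             # = 3073463841148318248

assert (f1 - 1) % 1386901 == 0 and (f2 - 1) % 7991441 == 0
assert pow(3, e1, f1) != 1           # 3 is not a 1386901-th power mod f1
assert pow(3, e2, f2) != 1           # 3 is not a 7991441-th power mod f2
```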


> Another explanation is that I vaguely remember someone saying that an
> optimisation to the implementation of stage 2 had a side effect of possibly
> introducing new factors, but I don't remember the details.  Can anyone shed
> any light on this?

This is the Brent-Suyama extension, aka Suyama's powers. In short, if 
you choose a Suyama power E, a factor f will be found if the largest 
factor of f-1 divides some (mD)^E - d^E, where D is an integer chosen 
according to available memory, and m and d are integers such that 
B1 < mD-d <= B2, 1 <= d < D, and mD-d is prime.

For your f1 factor, E was some multiple of 4 and D some multiple of 210 
but at least 420. So with D=420, m=1012 and d=331 (mD-d=424709, which 
is prime):

(1012*420)^4 - 331^4 = 32637674859096798947279 =
73 * 5827 * 130261 * 424709 * 1386901

There's the missing factor of f1-1!


For f2, likewise with D=420 and E=4: m=916 and d=19 (mD-d=384701, which 
is prime):

(916*420)^4 - 19^4 = 21906805696240066429679 =
59 * 6521 * 18521 * 384701 * 7991441
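Both identities are easy to confirm with exact integer arithmetic, e.g. 
in Python (taking D=420 and E=4 as above):

```python
# Confirming the two Brent-Suyama examples: (mD)^E - d^E is divisible
# by both the stage 2 prime mD-d and the missing large factor of f-1.

v1 = (1012 * 420) ** 4 - 331 ** 4
assert v1 == 32637674859096798947279
assert v1 % 424709 == 0              # mD-d, the stage 2 prime
assert v1 % 1386901 == 0             # missing factor of f1-1

v2 = (916 * 420) ** 4 - 19 ** 4
assert v2 == 21906805696240066429679
assert v2 % 384701 == 0              # mD-d, the stage 2 prime
assert v2 % 7991441 == 0             # missing factor of f2-1
```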


> One of the things I would like to do is to be able to determine, for each
> factor, what the minimum values of B1 and B2 would have to be for it to have
> been uncovered by a P-1 test, irrespective of what actual bounds were used,
> or even if it was found using another method.  This is easy to do in the
> case of the k-smooth factors.  Is there any way to characterise these
> additional ones?

That's kinda hard to do, because the choice of D and E depends on the 
implementation. Prime95 chooses D so that phi(D)+E+4 (phi() is Euler's 
totient function) residues fit into memory, and chooses E like this:

    if (D <= 180) E = 2;
    else if (D <= 420) E = 4;
    else if (D <= 2310) E = 12;
    else if (D <= 6930) E = 30;
    else E = 48;

(see ecm.c, choose_pminus1_plan() )

The problem is that less memory means a smaller D, and thus more 
distinct values of mD, while more memory means a larger D, and thus 
more distinct values of d. So which values are produced by (mD)^E - d^E 
depends a lot on available memory. You can actually *fail* to find a 
given factor after increasing available memory, although that is very 
unlikely. The B2 value influences the choice of D, too, as D is 
normally capped at sqrt(B2-B1), unless that would make D < 2310.

The easiest way to determine which bounds are minimal for a given 
factor and memory setting is simply to try it in Prime95: halve the B2 
interval until you find the limit. You can keep a stage 1 residue by 
running P-1 with B2=B1, saving the p??????? file to another directory, 
and copying that file back into your Prime95 directory before each new 
attempt, to avoid redoing stage 1 each time.

The more coding-intensive, but more CPU-time-conserving, way would be 
to write a program that generates the (mD)^E - d^E values, imitating 
Prime95's choice of D and E for a given amount of memory and given B1 
and B2 values, and looks for values that your missing factor of f-1 
divides.
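A minimal sketch of such a program in Python (the fixed D and E are 
assumptions passed in by the caller; a faithful version would imitate 
Prime95's memory-dependent choices described above, and would also 
handle the mD+d primes that an even E picks up for free):

```python
# Hypothetical sketch: for given B1, B2 and assumed D and E, list the
# stage 2 primes q = mD - d whose (mD)^E - d^E value is divisible by
# the missing (non-smooth) factor of f-1.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(2, n + 1) if sieve[i]]

def brent_suyama_hits(B1, B2, D, E, missing):
    hits = []
    for q in primes_up_to(B2):
        if q <= B1:
            continue
        m = q // D + 1               # smallest m with m*D > q
        d = m * D - q                # then q = m*D - d, 1 <= d < D
        if ((m * D) ** E - d ** E) % missing == 0:
            hits.append((q, m, d))
    return hits

# The two factors from this thread are recovered at the expected (m, d):
print(brent_suyama_hits(45000, 675000, 420, 4, 1386901))
print(brent_suyama_hits(40000, 650000, 420, 4, 7991441))
```

With the bounds from your results.txt lines, the first call reports the 
hit at q=424709 (m=1012, d=331) and the second at q=384701 (m=916, 
d=19), matching the worked examples above.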

> Regards
> 
> Daran G.

Alex

_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
