Re: Mersenne: p-1 and trial factoring

2000-02-28 Thread Brian J. Beesley

On 25 Feb 00, at 16:52, George Woltman wrote:

 However, it must be pointed out that at some point you are better off
 switching to ECM rather than expanding the P-1 bounds.  I'm not sure what
 that point is.

And, at some point, ECM itself gives way to CFRAC, SNFS, ...

It's possible to find pathological examples of numbers which are 
easily factored by any one method but practically impervious to the 
others.

The resources needed must be taken into account. ECM on Mersenne 
numbers with exponents well into the millions is going to be slow & 
_very_ memory intensive. Let's get P-1 going first & "argue" about 
implementing ECM (for "trial factoring" of large exponents) later.

Regards
Brian Beesley
_
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers



RE: Mersenne: p-1 and trial factoring

2000-02-27 Thread Paul Leyland

BJB wrote:

 Yes - I think we need this database - with or without savefiles,
 it's a waste of effort to inadvertently duplicate work done before.
 Since P-1 is deterministic (like trial factoring, but unlike
 Pollard's rho or ECM) you should get the same result every time if you use
 the same limits on the same exponent. 

If your implementations of rho and ECM are not deterministic, I suggest that
you reimplement and/or get a working machine.

Running rho with the same iterated function and starting value on the same
exponent should give the same result every time.

Running ecm with the same limits and the same curve on the same exponent
should likewise give you the same result every time.

Indeed, rerunning a new implementation with the same parameters as an old
and (presumably) working implementation is one of the important ways of
debugging and tuning the new version.
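As a concrete illustration (a toy sketch of mine, not anyone's production
code), a bare-bones Pollard rho with a fixed iterated function
f(x) = x^2 + 1 and fixed starting value retraces the same sequence and
therefore returns the same factor on every run:

```python
# Toy Pollard rho (illustrative only): with a fixed iterated function
# f(x) = x^2 + c mod n and a fixed starting value x0, every run walks
# the identical sequence, so the result is fully deterministic.
from math import gcd

def rho(n, x0=2, c=1):
    """Floyd-cycle Pollard rho on n with f(x) = x^2 + c mod n."""
    x = y = x0
    d = 1
    while d == 1:
        x = (x * x + c) % n          # tortoise: one step
        y = (y * y + c) % n          # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    return d if d != n else None     # None on the rare total-collision failure

n = 2**29 - 1                        # M29 = 233 * 1103 * 2089 (composite)
print(rho(n) == rho(n))              # -> True: deterministic by construction
```

The same argument applies to ECM: fix the curve (i.e. the seed) and the
bounds, and the outcome is fixed.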


The degree of freedom in choosing an elliptic curve for ECM is probably what
you're hinting at.  I'd suggest that a database of the number of curves run
at each B1/B2 limit is still useful.  George keeps such a database for small
exponents.


Paul





RE: Mersenne: p-1 and trial factoring

2000-02-27 Thread Ken Kriesel

At 02:55 AM 2/27/2000 -0800, Paul Leyland [EMAIL PROTECTED] wrote:
BJB wrote:

 Yes - I think we need this database - with or without savefiles,
 it's a waste of effort to inadvertently duplicate work done before.
 Since P-1 is deterministic (like trial factoring, but unlike
 Pollard's rho or ECM) you should get the same result every time if you use
 the same limits on the same exponent. 

...


The degree of freedom in choosing an elliptic curve for ECM is probably what
you're hinting at.  I'd suggest that a database of the number of curves run
at each B1/B2 limit is still useful.  George keeps such a database for small
exponents.


Paul

It's my understanding that the choice of curve is randomized intentionally,
and that the space of possible curves for each exponent is so large that
it is more efficient to check random curves than to attempt to eliminate
the very small chance of calculating the same curve more than once.
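A quick back-of-envelope calculation (mine, not from the thread) supports
this: if seeds are drawn uniformly from a 31-bit space, the birthday bound
makes a repeated curve vanishingly unlikely:

```python
# Probability that k seeds drawn uniformly from a 31-bit space contain at
# least one repeat (exact birthday computation; the 2^31 space size is an
# assumption based on the seed values in the F16 log above).
def collision_prob(k, space=2**31):
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (space - i) / space
    return 1.0 - p_distinct

print(collision_prob(17))     # 17 seeds, as in the F16 run above: ~6e-8
print(collision_prob(1000))   # even 1000 curves: ~2e-4
```

So bookkeeping to avoid duplicate curves would cost more than the duplicates
it prevents.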

This approach is used not only in GIMPS, but in George's implementation of 
Richard Crandall's program for searching for factors of Fermat numbers.

It is not necessary or advisable to run all possible curves.
In fact, for F16, my runs produced, in sequence, the following seed numbers,
many of which found the factor 825753601:
Attacking 2^65536 + 1
Selecting elliptic curve seed s = 1561883045:
Selecting elliptic curve seed s = 630999867:
Selecting elliptic curve seed s = 1921480920:
Selecting elliptic curve seed s = 1664751173:
Selecting elliptic curve seed s = 671100398:
Selecting elliptic curve seed s = 2008555722:
Selecting elliptic curve seed s = 112071784:
Selecting elliptic curve seed s = 1157242418:
Selecting elliptic curve seed s = 690847237:
Selecting elliptic curve seed s = 823396944:
Selecting elliptic curve seed s = 1372799571:
Selecting elliptic curve seed s = 244007149:
Selecting elliptic curve seed s = 2013596788:
Selecting elliptic curve seed s = 1958155050:
Selecting elliptic curve seed s = 441279715:
Selecting elliptic curve seed s = 248788694:
Selecting elliptic curve seed s = 831355685:

Note that they go to at least 2 x 10^9.

Ken




Mersenne: p-1 and trial factoring

2000-02-27 Thread Will Edgington


Reto Keiser writes:

   A lot of factors of exponents between 1 and 100 were found
   using the new P-1 method. Is there a database which contains which
   exponents were tested using which B1 and maybe a database of the
   save files?

The P-1 data is also collected by me, in the 'o:' lines of my
programs' usual format (mersfmt.txt).  I also collect P-1 save files
and try to keep track of who is running Factor98 and other (older) P-1
programs on which exponents.

Note that this will likely not affect the new GIMPS and Primenet P-1
efforts, as I'm concentrating on small exponents where complete
factorizations might be feasible; the new GIMPS P-1 efforts will be to
find a first factor to avoid having to do LL tests.

Will

http://www.garlic.com/~wedgingt/mersenne.html
mersfmt.txt



Re: Mersenne: p-1 and trial factoring

2000-02-25 Thread Brian J. Beesley
On 25 Feb 00, at 15:23, Reto Keiser wrote:

> Why can't we do first the factorization up to n-2 bits (1/4) of the
> trial factoring time, then start the P-1 factoring up to 1/3 of the B1
> value, after this, we can complete the trial factoring process and at the
> end we complete the P-1 (using the save file or intermediate file). (the
> parameters can be optimized)

This sounds fairly sensible. However, this entails splitting the factoring part into fairly small sub-assignments, which may cause unnecessary complications with the server. Also, trial factoring and P-1 are quite different from the point of view of system requirements - trial factoring uses very little memory (in practice it runs almost entirely in the L1 cache) whereas P-1 is actually more of a memory hog than LL testing. So I suspect we want some bias towards early trial factoring rather than P-1.
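For concreteness, the schedule Reto proposes could be sketched like this
(the routine names are stand-ins of mine, not real GIMPS code; the bit
depth and B1 value are illustrative, and the fractions are his examples):

```python
# Record the order of the four proposed phases; the stub functions below
# just log what a real client would do at each step.
schedule = []
def trial_factor(p, to_bits): schedule.append(("TF", to_bits))
def p_minus_1(p, to_B1):      schedule.append(("P-1", to_B1))

def factor_exponent(p, n_bits=68, B1=90000):
    trial_factor(p, n_bits - 2)   # first chunk: ~1/4 of trial-factoring time
    p_minus_1(p, B1 // 3)         # partial P-1; write a save file here
    trial_factor(p, n_bits)       # complete trial factoring
    p_minus_1(p, B1)              # resume P-1 from the save file to full B1

factor_exponent(33219661)
print(schedule)  # -> [('TF', 66), ('P-1', 30000), ('TF', 68), ('P-1', 90000)]
```

Each phase boundary is a point where the server would have to hand out or
record a sub-assignment, which is the complication mentioned above.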

> until now >210 factors are found for 10-megadigit numbers and more than 280
> exponents were factored up to 68 bits. Some (about 7) 67-bit factors
> were found but none with 68 bits.

This is likely to be a statistical anomaly. A sample size of 7 is a bit small to condemn the data as biased.

> A lot of factors of exponents between 1 and 100 were found using
> the new P-1 method. Is there a database which contains which exponents were
> tested using which B1 and maybe a database of the save files?

Yes - I think we need this database - with or without savefiles, it's a waste of effort to inadvertently duplicate work done before. Since P-1 is deterministic (like trial factoring, but unlike Pollard's rho or ECM) you should get the same result every time if you use the same limits on the same exponent.
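To make the determinism concrete, here is a toy stage-1 P-1 sketch of mine
(nothing like Prime95's optimized code): the result depends only on the
exponent and B1, so reruns always agree. With p = 67 and B1 = 2700 it lands
on the classic factor 193707721 of 2^67 - 1, because
193707720 = 2^3 * 3^3 * 5 * 67 * 2677 is 2700-smooth:

```python
# Toy P-1 stage 1 on the Mersenne number 2^p - 1 (illustrative only).
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def p_minus_1_stage1(p, B1):
    """Raise a base to every prime power <= B1, then take a gcd."""
    n = (1 << p) - 1
    a = 3
    for q in primes_up_to(B1):
        e = 1
        while q ** (e + 1) <= B1:    # largest power of q not exceeding B1
            e += 1
        a = pow(a, q ** e, n)
    d = gcd(a - 1, n)
    return d if 1 < d < n else None  # None: no factor split off

print(p_minus_1_stage1(67, 2700))    # the same answer on every run
```

Same exponent, same B1, same factor - which is exactly why a database of
(exponent, B1) pairs already tried is worth keeping.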

If anyone has any data to contribute, I'd be willing to assemble & publish the database. I also have adequate storage space on my anon ftp server for save files.



Regards
Brian Beesley


Re: Mersenne: p-1 and trial factoring

2000-02-25 Thread George Woltman

Hi,

At 03:23 PM 2/25/00 +0100, Reto Keiser wrote:
parallel use of p-1 and trial factoring
---

Why can't we do first the factorization up to n-2 bits (1/4) of
the trial factoring time, then start the P-1 factoring up to 1/3 of the
B1 value, after this, we can complete the trial factoring process and at
the end we complete the P-1 (using the save file or intermediate file).
(the parameters can be optimized)

I can't see any flaws in your reasoning, although it would be a bit unwieldy
to implement.

no 68 bit factors
-

until now >210 factors are found for 10-megadigit numbers and more than 280
exponents were factored up to 68 bits.
Some (about 7) 67-bit factors were found but none with 68 bits.

My database has:

33219661  73867482830512390441
33223387  83006905661336745889
33221387  123317319076102495049
33235409  128314644111933147703
33238463  131707491089550166169
33230671  139408728702078150121
33224957  193425473534465274127

That's 6 67-bit factors and 1 68-bit factor.  Not the expected
distribution, but nothing to be concerned about yet either.
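A rough binomial check of my own (not from the thread) agrees: by the usual
1/k heuristic, a factor in the 66-68 bit window is 67-bit with probability
about 67/133, so each of the 7 factors is close to a fair coin, and six or
more 67-bit factors out of 7 is only mildly improbable:

```python
# Chance of seeing >= 6 of 7 factors in the 67-bit class, assuming each
# factor independently falls in [2^66, 2^67) with prob ~1/66 versus
# [2^67, 2^68) with prob ~1/67 (the standard heuristic, renormalized).
from math import comb

p67 = (1 / 66) / (1 / 66 + 1 / 67)   # = 67/133, chance a factor is 67-bit
prob = sum(comb(7, k) * p67**k * (1 - p67) ** (7 - k) for k in range(6, 8))
print(round(prob, 3))                # about 0.065: a mild fluke, not a bias
```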

organization of p-1 factoring
-

A lot of factors of exponents between 1 and 100 were found using
the new P-1 method. Is there a database which contains which exponents
were tested using which B1 and maybe a database of the save files?

All exponents from 2 to 11 were done with B1=1M and B2=40M
Exponents from 11 to 60 (still in progress) were done with
B1=100K and B2=4M.  I still have the save files for exponents below 11.
I think Alex has the save files for the larger exponents.

However, it must be pointed out that at some point you are better off switching
to ECM rather than expanding the P-1 bounds.  I'm not sure what that point is.

Regards,
George
