Disclaimer: All my commentary is based on recollections of what I did a
few years ago. I believe there are comments in an archive somewhere
by Peter Montgomery and George Woltman containing much more accurate
info.
Lucas Wiman wrote:
>
> Sorry if this is sending out twice, I wasn't sure that it got sent the
> first time...
>
That was my fault. I cc'd my response to base.org and the DNS got
confused.
> >> The main problem is
> >> that to get a bound of any reasonable size, and try multiple bases, you
> >> are looking at a time comparable to the LL test.
> (I suppose that on further investigation, with a bound value for factors of
> the exponent at 10,000 we are looking at about 14,000 squarings mod Mp, which
> is really not all that many)
>
You will want to do P-1 in addition to, and after, sieving. It would be
foolish to use P-1 to discover a 30-bit factor. If you are doing stage
1 only, I believe I had looked at a B1 of at least 100K, which would
translate to about 140K (I'm trusting your numbers) squarings. I think
that is still less than 5% of the number of squarings for a current
LL test, and it would reap factors for about 5% of the numbers, so it
would at least break even.
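To make the stage-1 cost above concrete, here is a minimal sketch of P-1 stage 1 on a Mersenne number. The function names and the choice of base 3 are my own illustrative assumptions, not anything from prime95; the one Mersenne-specific trick is folding the "free" factor 2p into the exponent, since every factor q of M_p satisfies q ≡ 1 (mod 2p).

```python
from math import gcd

def primes_upto(n):
    # Plain sieve of Eratosthenes; fine for demo-sized B1.
    sieve = [True] * (n + 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i in range(2, n + 1) if sieve[i]]

def stage1_pminus1(p, B1, base=3):
    """P-1 stage 1 on M_p = 2^p - 1 (illustrative sketch).

    Raises `base` to E = 2*p * prod(q^floor(log_q B1)) over primes
    q <= B1; the extra 2*p exploits q = 1 (mod 2p) for free.
    """
    Mp = (1 << p) - 1
    x = pow(base, 2 * p, Mp)
    for q in primes_upto(B1):
        qk = q
        while qk * q <= B1:       # largest power of q not exceeding B1
            qk *= q
        x = pow(x, qk, Mp)
    g = gcd(x - 1, Mp)
    return g if 1 < g < Mp else None  # None: no factor, or all factors hit
```

For example, with p = 29 and B1 = 10 this returns 486737 = 233 * 2089: both 232 = 2^3 * 29 and 2088 = 2^3 * 3^2 * 29 are 10-smooth once the free factor 29 is discounted, while 1102 = 2 * 19 * 29 is not, so 1103 escapes.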
> >> Also, one of the keen
> >> things about GIMPS is that in its search for primes, it came up with loads
> >> of factoring data for Mersennes. However, the P-1 algorithm would make any
> >> factor distribution data a lot less usable (as it would not include P with
> >> P-1 having a large prime factor).
> (That is to say, if the P-1 algorithm was used, it still couldn't find all the
> factors <2^64, unless we are talking about incredibly high bound values)
>
> >> On the other hand, the P-1 algorithm can take advantage of the special
> >> form of Mersenne factors. We know that P-1 must be divisible by the
> >> Mersenne exponent, and that in 1/2 of the cases, it is divisible by 8.
> (we know that the factors must be of the form 2*k*p+1, hence the factor-1
> must be divisible by 2*p, or in the form 8*n+/-1, hence the factor-1 must
> be divisible by 2 or 8)
>
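The same special form drives the sieving step: trial division only needs to test candidates q = 2*k*p + 1 with q ≡ ±1 (mod 8). A rough sketch (function name is mine, and a real sieve would also weed out composite candidates first):

```python
def mersenne_trial_factor(p, max_k):
    """Trial-divide M_p = 2^p - 1 using only candidates of the
    required form q = 2*k*p + 1 with q = +/-1 (mod 8)."""
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):   # factors of M_p are +/-1 mod 8
            continue
        if pow(2, p, q) == 1:     # q divides 2^p - 1
            return q
    return None
```

For p = 11 this finds the classic factor 23 at k = 1; for p = 29 it finds 233 at k = 4, having skipped k = 1..3 on the mod-8 test alone.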
> > I'm not sure that I understand what you are saying, but the
> > following are a few observations that I made when I looked at the
> > problem a few years ago. The major stumbling block for an efficient
> > P-1 is the implementation of a time and memory efficient GCD. If one
> > has a good GCD algorithm then a stage 1 only implementation becomes
> > cost effective for exponents around 3 million.
> > However, the cost savings is tiny, probably less than a
> > percent overall. The major benefit is that one does not need to double
> > check the exponent.
>
> Yup, silly me. You are quite correct, of course, that the GCD is the
> most time consuming part.
>
Not exactly. With a reasonable B1 I doubt that it would be the most
time-consuming part, but a straight Euclidean or Lehmer algorithm would
be quite slow. A divide-and-conquer algorithm would be much faster, but
extremely memory hungry, violating GIMPS's unstated (but very positive)
invisibility constraint: no noticeable effect on the host system's
operation. I believe that one of the reasons the next version of
prime95 will contain a straight FFT multiply is to support a divide-and-
conquer GCD, but that is supposition on my part.
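As an illustration of the "straight" end of that trade-off, here is Stein's binary GCD, which uses only shifts and subtraction. It is simple and memory-light but still quadratic in the operand size, which is exactly why it hurts on multi-million-bit operands; the divide-and-conquer (half-GCD) approach is subquadratic at the cost of the large memory footprint described above. This is only a sketch, not what any GIMPS client actually does.

```python
def binary_gcd(a, b):
    """Stein's binary GCD: shifts and subtractions only, O(n^2) bit ops."""
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:       # strip common factors of 2
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a
        b -= a                    # both odd, so b - a is even
    return a << shift
```

In the P-1 context the single call that matters is gcd(x - 1, M_p); e.g. binary_gcd(2**29 - 1, 486737) returns 486737 because that product of two factors divides M_29.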
> I imagine that from what you are saying, it gets more cost-effective
> the higher the exponent. How does it look at around 33219281?
I think that a P-1 would probably be necessary. There may be enough
room for ECM to have overcome P-1's free factor of p, but I don't think
so.
>
> -Lucas
> _________________________________________________________________
> Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
> Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers