> but doing the Pollard-Rho along a LL test would not
> be particularly efficient, anyways.

Or particularly successful. Remember that Pollard-rho heuristically expects to find a 
factor p in something on the order of sqrt(p) iterations. Since we're doing, let's call 
it, 10^7 iterations, you'd probably be better off trial-factoring to 10^14 - which has 
already been done (and beyond) with a guarantee that no factor is missed. You might get 
lucky and stumble on a larger factor, but experience (and the law of averages) tells you 
you aren't going to be *that* lucky.
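
For the record, here's a rough Python sketch of the classic rho iteration (the defaults 
a=1, x0=2 and the names are mine, purely for illustration). The point is simply that 
pulling out a prime factor p costs on the order of sqrt(p) squarings, so a 10^7-step rho 
run only reaches territory trial factoring has already covered:

    from math import gcd

    def pollard_rho(n, a=1, x0=2):
        """Textbook Pollard-rho with Floyd (tortoise/hare) cycle finding,
        iterating x -> x^2 + a mod n.  Heuristically a prime factor p of n
        turns up after on the order of sqrt(p) iterations."""
        x = y = x0
        while True:
            x = (x * x + a) % n          # tortoise: one step
            y = (y * y + a) % n          # hare: two steps
            y = (y * y + a) % n
            g = gcd(abs(x - y), n)
            if g == n:
                return None              # every factor collapsed at once; retry with a new a
            if g > 1:
                return g                 # nontrivial factor found

Note each comparison costs three modular squarings plus a gcd - exactly the kind of 
overhead that makes piggybacking this on an LL test unattractive.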

>For every 2 LL iterations, you'd have to do another
>one for the cycle find algorithm and a multiply to
>store up any factor you find. Thats 9 transforms
>instead of 4

Brent's modification of Pollard-rho doesn't require storing the two parallel sequences 
x_n and x_2n. Instead, consider x_n - x_k, where k is the greatest power of 2 less than 
n. At worst it could take about twice as many iterations to detect a cycle, but each 
iteration costs one squaring instead of Floyd's three, so overall it still comes out 
well ahead.
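
A minimal sketch of the Brent-style version (Python again, with the same made-up 
defaults), keeping only the value saved at the most recent power-of-two index:

    from math import gcd

    def brent_rho(n, a=1, x0=2):
        """Brent's cycle finding: compare x_i against the saved value from the
        most recent power-of-two index, instead of running Floyd's half-speed
        second sequence.  One squaring per step instead of three."""
        f = lambda x: (x * x + a) % n
        saved = x0                       # value at the last power-of-two index
        x = f(x0)
        i, power = 1, 1
        while True:
            g = gcd(abs(x - saved), n)
            if g == n:
                return None              # all factors found at once; retry with another a
            if g > 1:
                return g
            if i == power:               # i is a power of two: move the anchor here
                saved, power = x, power * 2
            x = f(x)
            i += 1

(In a real implementation you'd also batch the differences - multiply a few hundred of 
them together mod n and take a single gcd - but that's detail.)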

> And Pollard-Rho is probably not very well suited for
> finding Mersenne factors as it cannot exploit the
> fact that these factor are 1 mod (2p) as P-1 can.

The extra exponentiation at the start of the P-1 algorithm is hardly a great 
exploitation. Note that the 'rho' in Pollard-rho just means your iteration function 
should behave (pseudo)randomly - you can create a pseudorandom iteration that does 
exploit the form of the factors. Of course, that's no longer an LL iteration, though.
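
To make the comparison concrete, here's roughly what that extra exponentiation amounts 
to - a bare-bones P-1 stage 1 for N = 2^p - 1 (a Python sketch with my own naming, no 
stage 2, no error handling). Since every factor q is 1 mod 2p, you fold 2p into the 
exponent ahead of the usual product of small prime powers:

    from math import gcd

    def small_primes(limit):
        """Plain sieve of Eratosthenes."""
        sieve = bytearray([1]) * (limit + 1)
        sieve[0] = sieve[1] = 0
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
        return [i for i in range(2, limit + 1) if sieve[i]]

    def p_minus_1_stage1(p, B1):
        """P-1 stage 1 on N = 2^p - 1.  Any factor q satisfies q = 2kp + 1, so
        q - 1 always carries the factor 2p; putting it into the exponent is the
        'extra exponentiation' above.  If the rest of q - 1 is B1-smooth, then
        q - 1 divides E, 3^E = 1 (mod q) by Fermat, and the gcd picks up q."""
        N = (1 << p) - 1
        E = 2 * p                        # the free contribution from q = 1 (mod 2p)
        for q in small_primes(B1):
            qe = q
            while qe * q <= B1:          # largest power of q not exceeding B1
                qe *= q
            E *= qe
        g = gcd(pow(3, E, N) - 1, N)
        return g if 1 < g < N else None  # g == N means every factor was smooth at once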

> I'm mostly asking for curiosity, whether the LL
> iteration really makes a proper Pollard-Rho
> iteration, especially with the -2.

The classic Pollard-rho iteration x -> x^2+a isn't particularly good with a=0 or a=-2. 
The reason is the way the cycles degenerate. You want one of the cycles mod some 
unknown prime factor to be short. What you don't want is all the cycles collapsing at 
the same time... or never collapsing at all. Suppose you applied the same Pollard-rho 
iteration simultaneously to all (or at least many) possible initial points mod N 
(reportedly a near-perfect parallelization, according to a paper by Dick Crandall). The 
reason Crandall's parallelization works is that each application of the iteration 
inevitably reduces the number of distinct points, until eventually your N initial 
points are folded down to quite short, detectable cycles. However, iterate with a=0 or 
a=-2 and there are some obvious fixed points (solve the quadratic!) and other, less 
obvious, short cycles. In effect, there are some points you can't iterate away no 
matter how long you keep trying.
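
If you like numerical experiments, the degeneracy is easy to see: iterate every residue 
mod some modest N at once (the Crandall picture above) and count how many points survive 
when the image stops shrinking. A Python sketch - the claims in the comments are 
heuristics, not theorems:

    def surviving_points(N, a):
        """Apply x -> x^2 + a mod N to the whole set of residues repeatedly and
        return how many distinct points remain once the image stabilizes, i.e.
        the points lying on cycles."""
        points = set(range(N))
        while True:
            image = {(x * x + a) % N for x in points}
            if len(image) == len(points):
                return len(points)
            points = image

    # For a 'generic' a the surviving set behaves like that of a random map,
    # roughly sqrt(N) points.  For a = 0 the fixed points are 0 and 1, and for
    # a = -2 they are 2 and N-1 (solve x^2 + a = x); in both cases a large,
    # structured chunk of the residues sits on cycles that never iterate away.
    # Try a small prime modulus and compare a few choices of a.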

There's a good visual indicator that 0 and -2 aren't particularly good. z -> z^2+c is 
the iteration that generates Julia sets on the complex plane. Let's assume for the 
moment that somehow the behavior of the Pollard-rho iteration mod N and the behavior of 
the same iteration over C are equivalent - they are, but the mapping between them is 
hardly trivial.

The Julia sets for c=0 and c=-2 are devastatingly boring - a circle and a line segment, 
respectively - and their iteration brings you no surprises, no pretty pictures. In much 
the same way, you're not going to get any exciting factors with this iteration mod N 
either.

In Lucas-Lehmer terms, what happens during the LL / Pollard-rho iteration is that all 
the prime factors of the number have interlocked cycle lengths. That's great for 
primality proving (because if you get the expected final residue, you know something 
about *all* prime factors of your number - and hence conclude there can be only one). 
But a failed LL test simply tells you that *all* the prime factors of your number 
failed in the same way. There's nothing in the LL recipe that distinguishes any one 
prime factor from any other.
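
For reference, the recipe itself, as a Python sketch - note it's exactly the 
x -> x^2 - 2 map from above, started at 4:

    def lucas_lehmer(p):
        """Lucas-Lehmer test for M_p = 2^p - 1, p an odd prime: s_0 = 4,
        s_{k+1} = s_k^2 - 2 (mod M_p), and M_p is prime exactly when s_{p-2}
        is 0.  A zero residue constrains every prime factor of M_p at once,
        which is why it proves primality but never singles out an individual
        factor of a composite M_p."""
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # lucas_lehmer(13) is True (8191 is prime); lucas_lehmer(11) is False
    # (2047 = 23 * 89), but the test gives no hint of either factor.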

Chris Nash
Lexington KY
UNITED STATES


