Hi there

> Well, see the topic. In other words, is it possible to get a quick "P-1
> factoring for dummies" rundown?

Sure, let's give it a try. Suppose we do some calculations modulo some
number N. In effect we're actually doing all the calculations modulo all the
prime factors of N, but of course we don't know what those prime factors
are. But if we could find a calculation for which one of the prime factors
would 'disappear', it would help us find that factor.

Fermat's little theorem tells us that a^(p-1)=1 mod p for any prime p (and
any a not divisible by p). In fact, for each a there is a smallest number
x>0 such that a^x=1 mod p (the order of a modulo p). Usually all we know
about x is that it divides p-1. The idea behind P-1 factoring is this: if
we compute

R=a^(some big very composite number)-1 mod N

then, whenever our 'big composite number' is divisible by x, R will be
divisible by that unknown factor p of N, and we can recover p by computing
gcd(R,N).

Usually the big composite number is a product of powers of many small
primes, so it has many divisors and there is a good chance the unknown x
(which divides p-1, and may well equal it) is among them.
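A bare-bones stage-1 sketch of the idea, in Python. The bound B1=20, the
base a=3 and the toy number 459587 = 241*1907 are my own illustrative
choices; real implementations (Prime95, GMP-ECM) are far more careful.

```python
from math import gcd

def p_minus_1(N, B1, a=3):
    """Raise a to a 'big very composite number' (the product of all
    maximal prime powers <= B1) mod N, then look for a factor via a gcd."""
    x = a
    for q in range(2, B1 + 1):
        if all(q % d for d in range(2, int(q**0.5) + 1)):  # q is prime
            qk = q
            while qk * q <= B1:      # largest power of q not exceeding B1
                qk *= q
            x = pow(x, qk, N)        # x = a^(product of prime powers so far)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None  # None: no luck, or every factor hit at once

# 459587 = 241 * 1907, and 241 - 1 = 240 = 2^4 * 3 * 5: every prime power
# in 240 is <= 20, so this factor falls out at a tiny bound.
print(p_minus_1(459587, 20))    # -> 241
```

Note the factor 1907 stays hidden here: 1907 - 1 = 2*953 contains the
prime 953, far above the bound.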

> At which point is P-1 factoring worth the effort?

Probably as soon as your trial-factoring attempts have exceeded what the
machine is comfortable with. If you've reached 2^32, try P-1 for a little
while. Carrying on with trial division only gets slower, and your
probability of success with large trial factors is extremely low (a prime p
divides a random N with probability 1/p).

Of course P-1 may fail. You may have to go a very long way before the
unknown x divides your big composite number - what if x itself has a large
prime factor? P-1 will not find the factor until your limit reaches that
prime (in other words, your limit has to be at least the largest prime
factor of the unknown x). However, there are factors that P-1 finds
'easily', and even a failed P-1 run tells you a little more about the
number, which might help if you try some more trial-factoring (you know,
for instance, that for any remaining factor p, p-1 must be divisible by
some prime, or prime power, above the limit).
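To see that 'large prime factor of x' failure concretely, here is a toy
run. The numbers are my own picks for illustration: 1907 and 2027 are
prime, 1907 - 1 = 2*953 and 2027 - 1 = 2*1013, with 953 and 1013 prime.

```python
from math import gcd, lcm

def pm1(N, B1, a=3):
    # the exponent lcm(2..B1) is exactly the product of all
    # maximal prime powers <= B1
    E = lcm(*range(2, B1 + 1))
    return gcd(pow(a, E, N) - 1, N)

N = 1907 * 2027
# the unknown x for the factor 1907 divides 2*953, so it contains
# the large prime 953; until the bound passes 953, P-1 is blind to it
print(pm1(N, 100))     # -> 1 (bound below 953: factor missed)
print(pm1(N, 1000))    # -> 1907 (bound above 953: out it falls)
```

(`math.lcm` with multiple arguments needs Python 3.9 or later.)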

> Does it have any overlap with ECM?

The theory's very similar. Both algorithms attempt to find a point where one
of the unknown prime factors 'disappears' from the calculation. However, ECM
has better odds. In a P-1 attempt, the unknown x is fixed: you can't do
anything about it, and even if you try a different base a, you're very
likely to end up with a similar x with the same problem (a large prime
factor). In ECM, each curve you choose gives a different equivalent of 'x',
and those values vary a lot, so one of them may disappear from the
calculation quite quickly (though again, with large factors it could be a
long time before that happens).

Of course the steps involved in ECM factoring are a little more complex than
P-1...

> How should the bounds
> be chosen for any given exponent, and will higher bounds always find any
> factors smaller bounds would have, or is there an advantage to running
> lower bounds? As I understand, every run with _same_ bounds would find
> the same factors?

In theory you can change the base and the bounds. Changing the base often
has little or no effect unless you are incredibly lucky (and 'obvious'
bases related to the number are likely to be of little use - don't use base
a=2 for Mersennes!). Changing the bounds, though, can make the difference
between finding a factor and not finding one. We may fail when we test

a^(some small composite number)-1

but we may succeed when we test

a^((some small composite number)*(some larger composite number))-1

Writing it like this, you can also see that the larger bound succeeds
whenever the smaller bound does (the top exponent divides the bottom
exponent, so the top value divides the bottom value, and any factor the
small bound finds also appears at the large one). Of course larger bounds
take longer to compute, and there is also the possibility that a larger
bound finds "more than one" factor in a single run, in which case the gcd
returns a product of factors, or even N itself. Ideally you would check
periodically during the calculation whether you have already met a factor,
but that takes time too.
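That 'top divides bottom' claim is easy to check directly. Here the bounds
5 and 10 and the base 3 are arbitrary small choices of mine:

```python
from math import lcm

# exponents a P-1 run would use for bounds 5 and 10:
E_small = lcm(*range(2, 6))     # = 60   (prime powers 4, 3, 5)
E_big   = lcm(*range(2, 11))    # = 2520 (prime powers 8, 9, 5, 7)
assert E_big % E_small == 0     # the big exponent is a multiple of the small one

# hence a^E_small - 1 divides a^E_big - 1: every factor the small
# bound could reveal, the big bound reveals too
a = 3
assert (a**E_big - 1) % (a**E_small - 1) == 0
```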

The overriding "decision factor" is purely the time you're willing to
spend. Since factoring is far harder than primality testing, you might be
happy to spend, say, 10 times as long searching for a factor as you would
spend proving the number composite. An exponent ("some very big composite
number") around 10 times the bit length of N gives roughly that running
time, so it's a good ballpark for the bounds setting.
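That ballpark can be turned into a rough bound. This is my own
back-of-envelope, not a standard recipe: lcm(2..B1) has about B1/ln(2)
bits, so a 10x time budget suggests B1 of roughly 10 * bits(N) * ln(2).

```python
from math import lcm

def suggest_B1(N, budget=10):
    # lcm(2..B1) has ~ B1/ln(2) bits, so this bound makes the P-1
    # exponent about `budget` times the bit length of N
    return round(budget * N.bit_length() * 0.6931)

N = (1 << 127) - 1                 # a small Mersenne number as a stand-in
B1 = suggest_B1(N)                 # about 880 for a 127-bit N
E = lcm(*range(2, B1 + 1))
print(B1, round(E.bit_length() / N.bit_length(), 1))   # ratio close to 10
```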

Of course if you were willing to spend 100 or 1000 times as long, you could
set the bounds higher... but in that case, bear in mind that the P-1 test
is often unsuccessful. If you have that much time to spend, you might
prefer to dedicate it to a more sophisticated algorithm. Just like
trial-factoring, you have to increase the bounds *A LOT* before your
chances of success improve significantly - ultimately the two are very
closely related, because success depends on the factors of the unknown
magic number x.

Hope this helps

Chris Nash
Lexington KY
UNITED STATES


_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
