>> If you are doing stage
>> 1 only I believe that I had looked at a b1 of at least 100K which would
>> translate to about 140K (I'm trusting your numbers) squarings. I think
>> that that is still less than 5% of the number of squarings for a current
>> LL test, and would reap factors for about 5% of the numbers so would at
>> least break even.
My numbers are quite suspect. I just did the primorial of _10,000_, not
of 100,000 (though the prime number theorem tells us that the primorial
of x should approach e^x, hence the number of bits grows linearly in x).
I also did not take into account that (for example) a number (p-1 in
this case) is more likely to be divisible by 4 than by 5. That is to say,
instead of taking the strict primorial, I should have found the product of
p^floor(log_p(B1)) over the primes p up to B1
(note that p^floor(log_p(B1)) is the largest power of p not greater than B1;
including a still higher power of p would amount to choosing a bigger B1
in the first place).
(whew!)
(whew!)
Well, after all that, I found out that David's misinterpretation of my
original estimate is very close (shows the power of S.W.A.G.!). The number
of bits for B1=100,000 is 144,330 (plus the bits of the exponent, obviously).
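For anyone who wants to check that figure, here is a small sketch (the
function name is mine) that builds the stage-1 exponent described above:
the product of the largest power of each prime not exceeding B1, which is
the same thing as lcm(1, 2, ..., B1). Its bit length lands in the
neighborhood of the 144,330 quoted above:

```python
def stage1_exponent(B1):
    """Product of p^floor(log_p(B1)) over primes p <= B1, i.e. the
    largest power of each prime not exceeding B1.  This equals
    lcm(1, 2, ..., B1)."""
    # Sieve of Eratosthenes up to B1.
    sieve = bytearray([1]) * (B1 + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(B1 ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    E = 1
    for p in range(2, B1 + 1):
        if sieve[p]:
            pk = p
            while pk * p <= B1:  # raise p to its largest power <= B1
                pk *= p
            E *= pk
    return E

print(stage1_exponent(10))                     # 8*9*5*7 = 2520 = lcm(1..10)
print(stage1_exponent(100_000).bit_length())   # roughly 144,000 bits
```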
(Off topic: when I computed that value in Landon's Calc program, my
PII/233 started making a weird humming noise. The noise stopped when the
calculation finished. Very strange indeed...)
> There is obviously a breakeven point between trial factoring (which
> will _definitely_ find any factors up to the limit specified) and P-1
> (which might not). So, are we in any sort of position to estimate
> where that breakeven point might be, for exponents say around 10
> million?
Yes; there is no point in spending weeks to find a 32-bit factor, when it
would take ~1/5,000th of a second to find it with Brian's factor95
program.
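To see why small factors are so cheap, here is a toy trial-factoring sketch
(the function name and the k bound are mine, and this is nothing like as
fast as factor95): any factor q of Mp (p prime) has the form q = 2kp+1 and
satisfies q = +/-1 (mod 8), so very few candidates need testing:

```python
def trial_factor(p, max_k=10_000):
    """Search for a factor q = 2*k*p + 1 of Mp = 2**p - 1.
    Factors of Mersenne numbers must have this form and be
    congruent to +/-1 mod 8; we only look below sqrt(Mp)."""
    M = 2 ** p - 1
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q * q > M:
            break
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
    return None

print(trial_factor(11))  # -> 23  (M11 = 2047 = 23 * 89, found at k = 1)
print(trial_factor(23))  # -> 47  (M23 = 47 * 178481, found at k = 1)
```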
I suppose the estimate would be based upon the probability that one number
divides another, given the factorization of the dividend. I'll get back to
you.
> The fact that we would still be running trial factoring up to some
> limit (though maybe a little lower than we are at present) would to
> some extent remove the objection that an exhaustive search of at
> least part of the range of possible factors is useful.
As I mentioned earlier. However, we do have loads of factoring data,
plenty enough to test conjectures with...
(well, probably enough, anyway)
Besides, who knows: maybe the factors of Mersenne numbers with small
factors of P-1 follow a very strict rule, while they get less predictable
when the factors of P-1 get large.
> There is going to be another breakeven point between P-1 factoring
> and LL testing - if we run a LL test, then (barring accidents) we get
> a definitive result in a predictable time, whereas we could run P-1
> "for ever" on even quite a small exponent without arriving at a
> definite result.
The same is true of trial factoring. What might be useful is for someone to
factor a large number of the k's in 2kn+1 (n is the exponent) and see if any
patterns emerge (i.e. find out whether the p-1 usually takes special factors,
other than the usual ones). This would further increase the "deterministic
gap," even though it might increase the number of factors found.
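A sketch of that experiment (helper names are mine; the k bound is
illustrative): for each exponent p, collect the divisors q = 2kp+1 of Mp by
trial division, then factor each k. Note that the full number Mp itself, and
composite divisors, show up alongside the prime factors when k is allowed to
run that high:

```python
def small_factors(n):
    """Trial-divide n (small here) into its prime factorization."""
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

def k_factorizations(p, max_k=1000):
    """For each divisor q = 2*k*p + 1 of Mp with k <= max_k,
    return (k, factorization of k)."""
    out = []
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if pow(2, p, q) == 1:       # q divides 2**p - 1
            out.append((k, small_factors(k)))
    return out

for p in (11, 23, 29):
    print(p, k_factorizations(p))
# e.g. for p = 29 (M29 = 233 * 1103 * 2089) the k's are 4, 19 and 36
```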
Man, the p in P-1 and the usual p in Mp are easy to get confused!
> Can we speculate on approximately how much memory would be needed, as
> a design exercise? In any case, it needn't rule the scheme out
> altogether if we find that some systems aren't able to contribute
> usefully by reason of limited memory. After all, not all systems can
> contribute usefully to some other tasks, e.g. it's a bit pointless to
> try to run a LL test on a 7 million range exponent on a 486, though
> the program _will_ try, if you ask it to!
Well, this might not be a problem. If P-1 tries more than one base,
all that is required (per Mn) is *one* gcd: we just multiply the
residues from the different bases together mod Mn, and then take the gcd
of the final product with Mn. We could have the program, when the user
asks for a P-1 assignment, prompt for the times when the computer is on
but not being used. Or, we could have the program stop working whenever
the computer is in use; it could just save its intermediate files to
disk, to free up memory for the user who (graciously) gives us their
CPU cycles.
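A toy version of that single-gcd trick (a sketch; the function name, B1
and the choice of bases are mine): run stage 1 to the same bound for each
base, multiply the (residue - 1) values together mod Mp, and take one gcd
at the end:

```python
from math import gcd

def pminus1_stage1(p, B1, bases=(3, 5)):
    """P-1 stage 1 on Mp = 2**p - 1 with one combined gcd.
    E is the product of the largest prime powers <= B1, times p
    itself (any factor q of Mp has q - 1 = 2*k*p, so p | q - 1)."""
    M = 2 ** p - 1
    E = p
    q = 2
    while q <= B1:
        if all(q % d for d in range(2, int(q ** 0.5) + 1)):  # q prime
            pk = q
            while pk * q <= B1:
                pk *= q
            E *= pk
        q += 1
    acc = 1
    for b in bases:
        # multiply the stage-1 residues together; gcd taken once below
        acc = acc * (pow(b, E, M) - 1) % M
    return gcd(acc, M)

# M29 = 233 * 1103 * 2089, and 233 - 1 = 2**3 * 29: the 2**3 is <= B1
# and the 29 is the exponent itself, so 233 must divide the gcd.
print(pminus1_stage1(29, 8))
```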
-Lucas Wiman
_________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers