On 4 Dec 2001, at 20:36, Gordon Spence wrote:

> >I've triple-checked thousands of small exponents - some of the
> >ones where the accepted residual was recorded to only 16 bits or
> >less, which makes the chance of an undetected error _much_
> >greater (though still quite small) - so far no substantive errors in the
> >database have come to light. A very few (think fingers of one hand)
> >instances of incorrectly matched residuals have come to light -
> >completing the double-check in these cases proved that one of the
> >recorded residuals was correct.
> 
> Currently my team report cleared list shows 338 double checks and 12 double 
> checked factored including this monster

I'm not talking about missed factors. The database shows that all 
the small exponents ( < 1 million) have already been trial-factored 
at least a bit or two deeper than the "calculated optimum", so I 
haven't even been trying to extend that. I have found quite a few 
factors of these small exponents by running P-1, but that's a 
different story.

My point here is that, for database entries with at most one 64-bit 
residual (so that matching depends on only the bottom 16 bits), 
every triple-check LL test I've run so far has matched in the 
bottom 16 bits; indeed, whenever there was already a full 64-bit 
residual to compare with, my triple-check has always matched it 
in full.
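The comparison being described can be sketched as follows. This is a minimal illustration in Python, not the actual database code; the function name and the hex residues are mine:

```python
def residues_match(accepted_hex: str, new_hex: str, bits: int = 16) -> bool:
    """Compare two LL residues on their low-order `bits` bits only."""
    mask = (1 << bits) - 1
    return (int(accepted_hex, 16) & mask) == (int(new_hex, 16) & mask)

# With only 16 bits recorded, a chance match has probability 2**-16,
# so agreement is suggestive but much weaker than a full 64-bit match.
print(residues_match("0C4F1D2B9A73E1F0", "FFFFFFFFFFFFE1F0"))  # True
print(residues_match("0C4F1D2B9A73E1F0", "0C4F1D2B9A73E1F1"))  # False
```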

The work I'm doing in this area is a marginally useful way of using 
systems which are rather too limited to do other work.
> 
> 6630223 87 DF 195139088771490335223859559 07-Apr-01 07:58 trilog
> 
> (In fact when it was checked in PrimeNet initially rejected it because it 
> was longer than this sort of check was supposed to find! Has anyone found a 
> factor bigger than 87 bits using Prime95?)

Interesting, though George's answer explains this. I wrote the factor 
validation code running on the server; I specifically wrote it to 
handle factors of any size (subject to system memory, at roughly 
8 bytes per decimal digit of the factor), with the rider that run 
times for extremely large factors (>> 50 digits) might be 
problematic, since the server cannot afford to spend very long 
on each validation.
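The essential check is cheap: q divides 2^p - 1 exactly when 2^p ≡ 1 (mod q), and that can be tested with modular exponentiation. A minimal sketch (my own function name, not the server code) in Python:

```python
def is_factor(p: int, q: int) -> bool:
    """True iff q divides the Mersenne number 2^p - 1.

    pow(2, p, q) takes O(log p) modular multiplications, so the test is
    fast even for very large q; only the size of q stresses memory.
    """
    return q > 1 and pow(2, p, q) == 1

# Small known example: M11 = 2047 = 23 * 89
print(is_factor(11, 23), is_factor(11, 47))  # True False
```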

My record factor found using Prime95 is

[Fri Sep 07 21:33:06 2001]
ECM found a factor in curve #199, stage #2
Sigma=6849154397631118, B1=3000000, B2=300000000.
UID: beejaybee/Simon2, P1136 has a factor: 
9168689293924594269435012699390053650369

I've actually found a couple of larger factors using P-1, but these 
don't count, as the factors found were composite. This can happen 
because P-1 depends on taking the GCD of a calculated value with 
the number being factored. (So does ECM.) If you're (un?)lucky you 
can find two factors at once, in which case the result of the GCD is 
their product. This is what ECM uses the lowm & lowp files for: 
ECM is often used to try to find more factors of a number with 
some factors already known, and when you get a GCD > 1 you have 
to divide out any previously known factors to see whether you've 
discovered a new one.
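Both effects can be seen on a toy example. The sketch below is a deliberately crude P-1 stage 1 (function names are mine, not Prime95's): on M11 = 2047 = 23 * 89, the smooth exponent is divisible by the order of 2 modulo both factors, so the GCD returns their product, and the known factor then has to be divided out:

```python
from math import gcd

def pminus1_stage1(N: int, B1: int) -> int:
    """Crude P-1 stage 1: raise 2 to a B1-smooth exponent mod N, then GCD."""
    a = 2
    for e in range(2, B1 + 1):
        a = pow(a, e, N)          # exponent accumulates every integer <= B1
    return gcd(a - 1, N)

def strip_known(g: int, known: list) -> int:
    """Divide previously known factors out of a GCD result (ECM/P-1 style)."""
    for f in known:
        while g % f == 0:
            g //= f
    return g                      # > 1 iff something genuinely new was found

# Both 23 and 89 fall out at once, so the GCD is their product:
g = pminus1_stage1(2047, 11)
print(g)                          # 2047 = 23 * 89
print(strip_known(g, [23]))       # dividing out the known 23 leaves 89
```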
> 
> Of course some of these may be because the original check went to a lower 
> bit depth than the version of Prime95 that I used.

Naturally.

> I know from doing "deep 
> factoring" in the 60m range that one more bit of factoring can find a "lot" 
> of extra factors...

Over ranges of reasonable size, the number of factors you find 
between 2kp+1 and 2Kp+1 (for fixed bounds k and K) should be 
independent of p, i.e. the expected distribution of smallest factors 
is logarithmic. For factors of a particular absolute size, larger 
exponents make finding factors easier. The effort involved is 
(ignoring complications resulting from computer word length, which 
are certainly not insignificant!) dependent on the range of k, not 
the range of 2kp+1.
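The point that effort scales with the range of k can be seen in a minimal trial-factoring sketch (my own code, not Prime95's): every candidate factor of 2^p - 1 has the form q = 2kp + 1 with q ≡ ±1 (mod 8), so one loop iteration per k is needed regardless of how big q itself is:

```python
def trial_factor(p: int, kmax: int) -> list:
    """Trial-factor 2^p - 1 over candidates q = 2kp + 1, k = 1..kmax."""
    found = []
    for k in range(1, kmax + 1):  # work is proportional to the k-range
        q = 2 * k * p + 1
        # a factor of a Mersenne number must be 1 or 7 mod 8
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            found.append(q)
    return found

print(trial_factor(11, 50))  # [23, 89] -- both prime factors of M11 = 2047
```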

> So if we say that as a ballpark figure half of these are 
> due to an increase in factoring depth, then the error rate from this 
> admittedly small sample is 1.78% or in other words of the current 137,924 
> exponents less than 20m with only a single LL test we can expect to find 
> just under 2500 exponents with an incorrect result.

This is an interesting finding - roughly in line with other estimates of 
raw error rates - but I'm not sure I entirely understand the logic. I 
simply don't see how "missed factor" errors during trial factoring 
are related to incorrect residuals resulting from LL testing - except 
that the LL test wouldn't have been run if the factor wasn't missed.


Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers
