On 3 Dec 2001, at 20:38, [EMAIL PROTECTED] wrote:

[... snip ...]
> Nevertheless, I stand by my assertion that "a 'Factored' status is
> better for GIMPS than a 'Two LL' status" ... insofar as our LL testing
> remains less reliable than our factor verification.  Double-checking
> certainly improves LL result reliability, but I think our record shows
> that a verified factor is still slightly (by a minute but nonzero
> margin) more reliable an indicator of compositeness than two matching
> nonzero LL residues.

AFAIK our record does _not_ show any such thing. In _theory_ 
there is a very small chance that we have accepted an incorrect 
residue even after double-checking. In such a case there is a 
small chance that the residue should in fact have been zero, 
i.e. that we missed a Mersenne prime.

I've triple-checked thousands of small exponents - including 
some where the accepted residue was recorded to only 16 bits or 
less, which makes the chance of an undetected false match _much_ 
greater (though still quite small) - and so far no substantive 
errors in the database have come to light. A very few instances 
(think fingers of one hand) of incorrectly matched residues have 
turned up; completing the double-check in those cases proved 
that one of the recorded residues was correct.
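
(To put rough numbers on that - my own back-of-envelope figures, 
assuming an erroneous run produces an effectively random 
residue: a wrong result matches a 16-bit recorded residue by 
pure chance about one time in 2^16, against one time in 2^64 for 
a full 64-bit residue. A sketch in Python:)

    # Odds that a *wrong* run reproduces the recorded residue by
    # chance, assuming erroneous residues are uniformly random
    # (an assumption, not a measured error model).
    for bits in (16, 64):
        print(f"{bits}-bit residue: false match ~ 1 in {2**bits:,}")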

[... snip ...]
> Some error sources would be more likely to affect the longer LL test
> than the shorter factor verification.)

Indeed - if errors occur at random intervals, results should get 
less reliable as runs take progressively longer. However, there 
does seem to be wide variation in the reliability of individual 
systems: some have a lot of problems, some very few (if any).

I've detected four problems on my own systems so far. Two were 
caused by a failed cooling fan on a P100 system (unlike current 
systems, the cooling was _almost_ sufficient even without forced 
ventilation). One was caused by a configuration error: running 
Glucas on an Alpha system, I inadvertently allowed "fast" 
arithmetic and turned off error checking - a really stupid thing 
to do on a system with a 53-bit mantissa FPU. As this was a QA 
run on an exponent in the 33 million range, I was lucky to lose 
only three months' work. The last one I never got to the bottom 
of, but I strongly suspect that Prime95's workspace memory was 
clobbered during a software installation on a Windows 98 system.
> 
> > It matters to those who are attempting to fully factor Mersenne
> > numbers, but that's a different project altogether,
> 
> Some of us like to participate in both.  :-)
> 
> > and one that is decades (at least) behind GIMPS. The only reason we
> > do any factoring at all is to reduce the time spent on LL testing.
> 
> But if factoring is not really part of GIMPS's purpose (and I agree it
> isn't), how can a separate factoring effort be "behind" GIMPS at all?
> Aren't they measuring their progress in a different, non-comparable,
> dimension?

Indeed.

There is still a distinction between finding a factor - _any_ 
factor will do; it doesn't even matter if it is a composite 
factor you can't break down further - which eliminates the need 
to run an LL test to rule out a Mersenne number as a candidate 
prime, and finding all the factors (or even just the smallest 
factor) of a Mersenne number.
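
(A minimal sketch of why any factor suffices - illustrative 
Python, not the optimised sieved code Prime95 actually uses. For 
prime p, every factor q of M(p) = 2^p - 1 has the form 
q = 2kp + 1 with q == +/-1 (mod 8), and q divides M(p) exactly 
when 2^p == 1 (mod q):)

    def trial_factor(p, max_k=100000):
        # Look for any factor q = 2*k*p + 1 of M(p) = 2**p - 1.
        # Illustrative only: real trial factoring sieves the
        # candidates and works to a bit depth, not a k limit.
        for k in range(1, max_k + 1):
            q = 2 * k * p + 1
            if q % 8 not in (1, 7):    # factors are +/-1 (mod 8)
                continue
            if pow(2, p, q) == 1:      # q divides 2**p - 1
                return q               # any factor kills the LL test
        return None

    print(trial_factor(11))            # 23, since M(11) = 23 * 89
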
> 
> > Besides, if you do manage to find a 75-digit factor of a
> > 2-million-digit Mersenne number, that still leaves a 1999925-digit
> > remainder. Really not much help :-)

(Aside - we're rather more likely to find a 75-bit factor than a 75-
digit factor. In fact, finding a 75-digit prime factor of a "no factors 
known" Mersenne number with an exponent under 80 million would 
be a significant achievement in itself. The slip doesn't alter the 
author's argument.)

At the moment, having found one factor, we quit. That's sufficient 
effort for the purpose of trying to find Mersenne primes. A little 
more work might break the cofactor down further.

Attempting to test the primality of the cofactor - even using a 
PRP test rather than a rigorous primality proof - would be 
interesting, but would unfortunately be about as expensive as 
running the LL test that we've managed to avoid!
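
(To make the cost comparison concrete - a minimal sketch, again 
illustrative Python: a base-3 Fermat PRP test of the cofactor 
N = M(p)/f computes 3^(N-1) mod N, roughly log2(N) ~ p squarings 
of p-bit numbers, essentially the same work as the p - 2 
squarings of an LL test:)

    def fermat_prp(n, base=3):
        # n is a base-3 probable prime if base**(n-1) == 1 (mod n).
        # The modular exponentiation costs ~log2(n) squarings mod
        # n; for a cofactor of M(p) that is ~p squarings, i.e.
        # comparable to the p-2 squarings of an LL test.
        return pow(base, n - 1, n) == 1

    p, f = 11, 23                      # M(11) = 2047 = 23 * 89
    print(fermat_prp((2**p - 1) // f)) # True: the cofactor 89 is prime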

> [ big snip ]
> I'm saying that it is rational for someone to decide to factor past
> the Prime95-calculated tradeoff points, and that it is unjustified to
> criticize "extra" factoring on the grounds that going past the
> Prime95-calculated tradeoff points is wasted effort.

It's rational for someone to have fun. If they want to factor past the 
Prime95-calculated tradeoff points, that's fine. Even better if they 
record their unsuccessful efforts, so that other people don't waste 
time searching the haystack for the needle that isn't there.
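
(For reference, the tradeoff Prime95 computes comes down to the 
standard heuristic that M(p) has a factor between 2^(b-1) and 
2^b with probability about 1/b: extending trial factoring to bit 
level b pays off only while 1/b times the cost of the LL test 
and double-check saved exceeds the cost of searching that bit 
level. A back-of-envelope sketch - the function name and cost 
units are made up, not Prime95's actual timings:)

    def worth_extending_to(b, ll_cost, tf_cost_at_b):
        # Expected LL time saved if a factor turns up at bit level
        # b, using the ~1/b chance heuristic; a factor saves both
        # the first-time LL test and its double-check.
        expected_saving = (1.0 / b) * 2 * ll_cost
        return expected_saving > tf_cost_at_b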

There is another point here. We _can_ (by trial factoring, and 
at a cost) eliminate at least some prime candidates from the 
class of Mersenne numbers which are so large that we will 
_never_ be able to LL test them - not even if the whole Universe 
were a computer, with a storage density of one bit per 
fundamental particle!
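
(A concrete illustration: deciding whether q divides M(p) needs 
arithmetic only on numbers the size of q, however enormous p is, 
since pow(2, p, q) takes about log2(p) squarings mod q. So we 
can trial factor, say, M(M(127)) - a number of about 5 * 10^37 
digits, which could never even be stored, let alone LL tested:)

    p = 2**127 - 1                    # M(127), itself prime
    for k in range(1, 100000):
        q = 2 * k * p + 1
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            print("factor:", q)       # would eliminate M(p) outright
            break
    else:
        print("no factor found")      # no factor of M(M(127)) is known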

By a logical extension of this argument, in _this_ Universe 
there are effectively three classes of natural numbers - not 
just prime and composite. There is also a third class (an 
infinite number of them, in fact) which we will never be able to 
factor or to prove prime by purely computational means, because 
the storage required is greater than the Universe can provide.

The challenge is to push the lower bound of this class of numbers 
as high as possible.


Regards
Brian Beesley