On 5 Dec 2001, at 6:09, [EMAIL PROTECTED] wrote:

> Brian Beesley wrote:
> > On 3 Dec 2001, at 20:38, [EMAIL PROTECTED] wrote:
> [... snip ...]
> > > I think our record shows that a verified factor is still
> > > slightly (by a minute but nonzero margin) more reliable an
> > > indicator of compositeness than two matching nonzero LL
> > > residues.
> >
> > AFAIK our record does _not_ show any such thing.
>
> Oh? It doesn't?

There is no evidence of any verified residues being incorrect. Neither is there any evidence that any verified factors are incorrect.
Whatever theory states, the experimental evidence is that verified factors are no more (or less) reliable than verified LL tests.

Suppose a taxi firm runs 10 Fords and 10 Hondas for a year. None of them break down. On that basis alone, there is no evidence whatsoever that one make is more reliable than the other. Naturally, other companies' experimental evidence may vary.
>
> [ big snip ]
> There is a small chance that we may accept an incorrect factor even
> after double-checking it, but that chance is even smaller than the
> small chance that we may accept an incorrect double-checked L-L
> residual.

I doubt very much that we would accept an incorrect factor. The double-checking is done with completely different code. Besides which, checking a factor takes a few microseconds, whereas checking an LL test likely takes a few hundred hours.
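Verifying a reported factor really is that cheap: a candidate f divides the Mersenne number M_p = 2^p - 1 exactly when 2^p ≡ 1 (mod f), which is a single modular exponentiation. A minimal sketch (my own illustration, not the actual GIMPS verification code):

```python
def divides_mersenne(p, f):
    """Return True if f divides the Mersenne number 2**p - 1.

    Equivalent to checking 2**p == 1 (mod f); three-argument pow()
    does the modular exponentiation without forming 2**p itself.
    """
    return pow(2, p, f) == 1

# 23 is a known factor of M11 = 2047 = 23 * 89
print(divides_mersenne(11, 23))  # True
print(divides_mersenne(11, 29))  # False
```

Even for exponents in the tens of millions, this takes microseconds, which is why re-verifying a factor is essentially free compared with re-running an LL test.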

If anything goes wrong during a factoring run, we would be far more likely to miss a factor which we should have found rather than vice versa. This is relatively unimportant from the point of view of finding Mersenne primes; the effect is a small loss of efficiency.

> How does that compare to the observed rate of incorrect factors
> discovered after triple-checking _them_?

AFAIK no-one bothers to triple-check factors. Nowadays factors are verified on the server at the time they're reported. I'm not privy to the server logs, so I simply don't know how many get rejected (except for the server _database_ problem reported recently - not related to the actual factor checking code - which causes problems with genuine factors > 32 digits in length). However, I can think of at least one way of pushing up the rejection rate.
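Beyond the divisibility test itself, a server can reject obviously bogus reports even more cheaply, since any prime factor of M_p (p an odd prime) must have the form 2kp + 1 and be congruent to ±1 mod 8. A hedged sketch of such sanity checks (my own illustration of the mathematics, not the server's actual code):

```python
def plausible_mersenne_factor(p, f):
    """Sanity-check a reported factor f of M_p = 2**p - 1 (p an odd prime).

    Any factor of M_p must be of the form 2*k*p + 1 for some k >= 1,
    and must be congruent to +1 or -1 modulo 8.
    """
    if f % 8 not in (1, 7):          # must be == +/-1 (mod 8)
        return False
    if (f - 1) % (2 * p) != 0:       # must be of the form 2*k*p + 1
        return False
    return pow(2, p, f) == 1         # finally, f must actually divide M_p

# 23 = 2*1*11 + 1 and 23 % 8 == 7, and 23 divides M11 = 2047
print(plausible_mersenne_factor(11, 23))  # True
print(plausible_mersenne_factor(11, 25))  # False: 25 - 1 is not a multiple of 22
```

The two form checks reject most garbage before the (already cheap) modular exponentiation is even attempted.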
>
> How many of those problems caused errors during L-L verifications,
> and how many caused errors during factor verifications?
>
All during LL first tests, or QA runs (which are LL & DC done in parallel with intermediate crosscheck points).

> However, you may not have spent anywhere near as much time doing
> factor verifications as you have doing L-L verifications, so it may
> not be valid to draw any conclusion about comparative error rates on
> your system.

I've spent no time at all verifying factors - it would take less than a minute to verify everything in the factors database. The total factoring effort I've put in (ignoring ECM & P-1 on small exponents) is only about 3% of my total contribution, so I would expect not to have had any factoring errors. Besides which, trial factoring _should_ have a lower error rate than LL testing, due to the lower load on the FPU (usually the CPU element most sensitive to excess heat) and the smaller memory footprint (less chance of data getting clobbered by rogue software or random bit-flips).

Regards
Brian Beesley
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ      -- http://www.tasam.com/~lrwiman/FAQ-mers

