Brian,
I'm wondering whether we may be misunderstanding each other's
contentions here. I thought you objected to at least some of what I
claimed, but now it seems that you're presenting arguments and
evidence that support it.
Since my previous postings may have had careless wording or otherwise
obscured my intentions, and I did not earlier realize the importance
of certain details to the discussion, let me restate what I've meant
to claim:
1. It is more valuable to know a specific factor of a Mnumber than to
know that that Mnumber is composite without knowing any specific
factor.
(There's little dispute about #1.)
2. Claim #1 is true not only from the viewpoint of mathematics in
general, but also from the narrower viewpoint of the GIMPS search for
Mersenne primes.
3. One (but not the only) justification for claim #2 is that, _in
current practice_, a composite status derived by GIMPS from finding a
specific factor is (slightly) more reliable than a composite status
derived by GIMPS from matching nonzero residues from Lucas-Lehmer
tests.
That is, although in theory, or ideally, those two methods of
determining compositeness are equally reliable, there currently exists
a slight difference in reliability, in favor of the factor, from a
practical standpoint.
4. Our experience ("the record"), as documented in the Mersenne
mailing list or GIMPS history, supports claim #3.
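To make claim #1 concrete, here's a small sketch (my own illustration, not GIMPS code) of why a specific factor is the more valuable datum: a claimed factor f of M(p) = 2^p - 1 can be verified by anyone with a single modular exponentiation, whereas "composite, no factor known" offers nothing independently checkable.

```python
# f divides 2**p - 1 exactly when 2**p mod f == 1, so one
# three-argument pow() call verifies a claimed factor.
def divides_mersenne(f: int, p: int) -> bool:
    """Return True if f is a factor of M(p) = 2**p - 1."""
    return pow(2, p, f) == 1

# Known example: 2**11 - 1 = 2047 = 23 * 89.
print(divides_mersenne(23, 11))   # True
print(divides_mersenne(89, 11))   # True
print(divides_mersenne(13, 11))   # False
```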
- - - - -
Brian Beesley wrote:
>> > AFAIK our record does _not_ show any such thing.
>>
>> Oh? It doesn't?
>
> There is no evidence of any verified residuals being incorrect.
Wait a second -- just yesterday you wrote that you had "triple-checked
thousands of small exponents" (which means they had already been
double-checked) and that "A very few (think fingers of one hand)
instances of incorrectly matched residuals have come to light -
completing the double-check in these cases proved that one of the
recorded residuals was correct".
So it seems that the meaning you're assigning to "verified" is
something like "retested and retested until two residuals match".
Is that a correct interpretation? If not, what is?
My claim #3 means that in practice, factors require fewer verification
runs to produce matching results than do L-L residues, on average.
Do you disagree with that? If not, then don't we agree about claim
#3?
Furthermore, my claim #4 means that the demonstration that factors
require fewer verification runs to produce matching results than do
L-L residues, on average, rests on the observed history _including the
paragraph you wrote from which I just quoted above!_ Do you disagree?
Also, in that same paragraph you wrote, "... - some of the ones where
the accepted residual was recorded to only 16 bits or less, which
makes the chance of an undetected error _much_ greater (though still
quite small) ..." Am I correct in interpreting this to mean that you
think that using 64-bit residuals is more reliable than using 16-bit
residuals? If so, then surely you'll grant that 256-bit residuals
would be even more reliable yet, meaning that there's still room for
error in our practice of using 64-bit residuals. But a specific
factor is a _complete value_, not some truncation, and so its
reliability is not damaged by the incompleteness which you admit keeps
the L-L residues from being totally reliable - right?
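To put rough numbers on the truncation point (my own back-of-envelope arithmetic, not figures from the thread): if two runs each went wrong independently and produced effectively random residues - a simplifying assumption - the chance they'd still match on the stored bits is about 2^-16 for 16-bit records versus 2^-64 for 64-bit records.

```python
# Odds that two independently wrong runs still agree on the stored
# residue bits, assuming the bad residues are effectively random
# (a simplifying assumption, not a claim about actual error modes).
for bits in (16, 64, 256):
    p_match = 2.0 ** -bits
    print(f"{bits:3d}-bit residue: accidental match ~ {p_match:.3e}")
```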
Then you wrote "so far no substantive errors in the database have come
to light", but seemingly contradicted that in the very next sentence,
"A very few (think fingers of one hand) instances of incorrectly
matched residuals have come to light - completing the double-check in
these cases proved that one of the recorded residuals was correct."
... And thus _the other_ recorded residual was _incorrect_.
> Neither is there any evidence that any verified factors are
> incorrect.
Depends on the meaning of "verified", of course. :-)
Will Edgington (I think) has reported finding errors in his factor
database ... even though he verifies factors before adding them.
Mistakes happen. But I think the error rate for factors has been
significantly lower than for L-L residuals.
> Whatever theory states, the experimental evidence is that verified
> factors are no more (or less) reliable than verified LL tests.
Then why don't we triple-check factors as often as we triple-check L-L
results? Oh, wait ... depends on the meaning of "verified", again.
> Suppose a taxi firm runs 10 Fords and 10 Hondas for a year.
[ snip ]
Let's have an example in which the number and nature of the units are
closer to the gigabytes of data items we're slinging around.
> Besides which, checking a factor takes a few microseconds, whereas
> checking a LL test likely takes a few hundred hours.
... which tends to support my claim #3.
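The cost asymmetry shows up even in a toy version of the standard Lucas-Lehmer recurrence (again my own sketch - real GIMPS clients use FFT-based multiplication): an L-L test needs p - 2 squarings of a p-bit number, versus the single modular exponentiation that verifies a factor.

```python
# Lucas-Lehmer test for M(p) = 2**p - 1, p an odd prime:
# s_0 = 4, s_{k+1} = s_k**2 - 2 (mod M(p)); M(p) is prime
# iff s_{p-2} == 0.  Each of the p-2 steps squares a p-bit number.
def lucas_lehmer_is_prime(p: int) -> bool:
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer_is_prime(7))    # True:  2**7 - 1 = 127 is prime
print(lucas_lehmer_is_prime(11))   # False: 2**11 - 1 = 2047 = 23 * 89
```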
> If anything goes wrong during a factoring run, we would be far more
> likely to miss a factor which we should have found rather than vice
> versa.
... which is in agreement with claim #3.
>> How does that compare to the observed rate of incorrect factors
>> discovered after triple-checking _them_?
>
> AFAIK no-one bothers to triple-check factors.
... because we know they're more reliable than L-L results (#3 again),
based on our actual experience (claim #4) with them.
So we seem to be in agreement about my claims #1-#4. I hypothesize
that your previously-expressed demurrers were related to unclear
wording on my part. Okay?
Regards,
Richard B. Woods
_________________________________________________________________________
Unsubscribe & list info -- http://www.ndatech.com/mersenne/signup.htm
Mersenne Prime FAQ -- http://www.tasam.com/~lrwiman/FAQ-mers