> I completely agree with you about the importance of heat on the
> integrity of the CPU chip.  But interestingly enough, heat is
> something I feel I can control (for instance, by adding more or
> bigger fans) and am satisfied about, as long as my system runs
> X hours of self-test, and Y months of Lucas-Lehmer, without
> visible error.

Fair enough ...
> 
> I feel less reassured about "hidden" influences which are less
> directly under my control:
> 
>  - Though I participate in GIMPS, I'm using neither Windows
>    nor an Intel CPU chip.  All I can hope for is that my setup
>    is as 'accurate' as the one prime95 was designed for.

Presumably you're running Linux (or some other variant of unix) on 
an AMD chip. Or just possibly Cyrix.

The FFT code in mprime is the same as the FFT code in Prime95, 
so you really don't have much to worry about there. Whilst AMD 
CPUs may have different undetected bugs from Intel CPUs, 
practical experience indicates that systematic errors from 
different CPU types are very rare. This is why we do double-
checking - with a different offset, the data being worked on will 
not be the same, and systematic errors in the CPU (or, for that 
matter, branches in the code which aren't often followed) will be 
detected by the final residues differing.
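The idea can be sketched in a few lines of Python - my own illustration 
only, working on plain integers rather than FFT data, and with 
`ll_residue` a made-up name (Prime95's real shift handling lives in the 
FFT code). The starting value 4 is shifted left by the offset; the shift 
doubles with each squaring, and is undone at the end, so an error-free 
run gives the same residue for every offset:

```python
def ll_residue(p, shift):
    """Lucas-Lehmer test of M_p = 2^p - 1, with the starting value
    shifted left by `shift` bits (mod M_p). Returns the residue after
    un-shifting; 0 means M_p is prime."""
    M = (1 << p) - 1
    k = shift % p            # current shift count; exponents reduce mod p
    s = (4 << k) % M         # shifted starting value: 4 * 2^k
    for _ in range(p - 2):
        k2 = (2 * k) % p     # squaring doubles the shift
        s = (s * s - (2 << k2)) % M   # the "- 2" must be shifted too
        k = k2
    # multiply by 2^(p-k) to undo the shift (2^p == 1 mod M_p)
    return (s * pow(2, (p - k) % p, M)) % M

# Different offsets push different bit patterns through the arithmetic,
# yet the final residues agree:
#   ll_residue(13, 0) == ll_residue(13, 5) == 0   (M13 is prime)
#   ll_residue(11, 0) == ll_residue(11, 3) != 0   (M11 is composite)
```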
> 
>  - From time to time I make changes to my internal system
>    components.  Who is to say that a new adapter card,
>    for example, will not affect the timings and/or phase
>    relationships of the system's internal signals?
> 
Yes. It can happen. I believe (and have said so before now) that 
you should re-run the selftest if you change anything major in your 
system - CPU, memory, operating system and particularly the 
version of mprime/Prime95. You can force this by deleting the 
"SelfTest" lines in local.ini
> 
> As far as making the self-test more rigorous, I don't know enough
> about the internal workings of FFTs, etc.  But applications are
> often tested by submitting the largest possible value, and the
> smallest possible value, and values which will exercise the most
> decision points.  For GIMPS, such "boundary" conditions might be
> in working with the most 'binary ones' or the most 'binary zeros'
> or the most 'carries' or the longest strings of 'binary ones
> alternating with binary zeros', or with values selected to cause
> the greatest change in the residue from one iteration to the next.

Experiments indicate that "all ones" and "all zeros" are 
exceptionally easy on the code. I suspect that we just don't know 
(& find it very hard to predict) which data patterns will be most 
difficult. The LL test itself, after the first few dozen iterations, 
generates pretty random-looking data (but remember, as Knuth 
points out, you can't generate true random data by a deterministic 
process!). In conjunction with the "scrambling" effect of different 
offsets, running a few hundred iterations of particular (suitably-
chosen) exponents is probably as good a way as any of testing the 
FFT code.

The FFT data consists of a lot of bits (at least as many bits as 
there are in the Mersenne number being tested) so it clearly isn't 
practical to test more than a minute proportion of the possible 
"input" values.


Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm
