On Fri, 7 May 1999, George Woltman wrote:

> Hi all,
> 
>       I'm working on version 19 of prime95 and I need your help.
> In the past, the exponents at which a larger FFT is used were picked
> rather haphazardly.  I simply picked a few exponents, trying to find
> one that could run a thousand or so iterations without the convolution
> error greatly exceeding 0.3.
> 
>       For this release, I thought it would be nice to use a more
> scientific approach.  What I've done is run almost 10,000 iterations
> of an exponent and use Excel to graph the number of iterations with
> an error between 0.200 and 0.201, 0.201 and 0.202, etc.  As you can
> imagine, you get a bell-like curve that peaks at some value.
> 
...
> 
> P.S.  A brief explanation of convolution errors:  Since the FFT in prime95
> is done in floating point arithmetic, round-off errors build up.  After the
> FFT, every value is rounded back to an integer.  The convolution error is
> the largest amount that a result value differs from the correct integer
> value.  If the convolution error exceeds 0.5, then the round-back-to-integer
> step will round to the wrong integer value - a catastrophic failure.
> 
> Another way to look at this:  Each FFT value contains about 20 bits of data.
> That is, values ranging from -2^19 to 2^19.  Consider an FFT of 128K elements.
> If each FFT value were close to the 2^19 maximum, then each result value
> would need 19+19+17=55 bits to hold the result.  Since a double-precision
> float only holds 53 bits, there would be a loss of information.  Fortunately,
> probability assures us that it is extremely unlikely that all (or even most)
> FFT values will be close to the 2^19 maximum value.
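The bit-count arithmetic in that last paragraph can be checked directly. Here is a small Python sketch (my own, not prime95 code), using the figures quoted above: 128K = 2^17 elements, each of magnitude at most 2^19:

```python
# Worst-case magnitude of one convolution output, using the quoted
# figures: 128K = 2^17 FFT elements, each of magnitude at most 2^19.
n_elements = 1 << 17   # 128K elements, contributing 17 bits
max_value = 1 << 19    # per-element bound, 19 bits of magnitude

# Each output value is a sum of n_elements products of two inputs.
worst_case = n_elements * max_value * max_value
assert worst_case == 2 ** 55          # 19 + 19 + 17 = 55 bits

# A double's 53-bit mantissa cannot hold every integer of that size,
# so information would be lost in the worst case:
assert float(worst_case + 1) != worst_case + 1
```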

Maybe my approach is completely misguided, but I would prefer to look at
the algorithm/source code and try to prove something. What are the
relevant parts of the source code? Is this part written in a language
that is reasonably easy for a human to follow (such as C instead of
assembly)?

Also, I would like to make sure I understand the arithmetic in the last
paragraph. You say the FFT values contain 20 bits of data each and that
they range from -2^19 to 2^19: I assume this means that they are
integers with 19 digits base 2, plus one bit for the sign.
But what are these integers? Surely not the terms of the Lucas-Lehmer
sequence mod 2^p - 1, or they would have as many bits as p, which is
around 2^23 for the exponents currently being tested.
I also assume that the 17 comes from 128K = 2^17, but why do we need
19+19+17 bits to hold the result?
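To make my question concrete, here is a toy sketch (mine, not prime95's code; all the names are made up) of where I think the numbers might come from: a cyclic convolution of length n sums n products of two inputs, and computing it through a floating-point DFT produces exactly the kind of near-integer results, with a small "convolution error", described above:

```python
import cmath
import random

def dft(x, sign):
    """Naive complex DFT; sign=-1 forward, sign=+1 inverse (unscaled)."""
    n = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
                for j in range(n)) for k in range(n)]

def cyclic_convolution_fft(a, b):
    """Cyclic convolution via the convolution theorem, in floating point."""
    n = len(a)
    prod = [u * v for u, v in zip(dft(a, -1), dft(b, -1))]
    return [c.real / n for c in dft(prod, +1)]

def cyclic_convolution_exact(a, b):
    """The same convolution in exact integer arithmetic, for comparison."""
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

random.seed(1)
a = [random.randint(-2**10, 2**10) for _ in range(8)]
b = [random.randint(-2**10, 2**10) for _ in range(8)]

approx = cyclic_convolution_fft(a, b)
exact = cyclic_convolution_exact(a, b)

# The "convolution error": largest distance of any result from an integer.
conv_err = max(abs(x - round(x)) for x in approx)
assert conv_err < 0.5                        # rounding still recovers...
assert [round(x) for x in approx] == exact   # ...the exact integer results
print("max convolution error:", conv_err)
```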

By the way, this is a somewhat dumb question, but is even an error of .499
considered ok (when rounding to the nearest integer, I mean)? Probably
not... what is the maximum tolerated error?
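For what it's worth, my own understanding (just a sanity check, not prime95's actual thresholds): any error strictly below 0.5 still rounds back to the correct integer, but an observed 0.499 leaves almost no margin for a worse, unmeasured iteration:

```python
# Rounding to the nearest integer recovers the true value for any
# error strictly below 0.5 -- but only just, at 0.499.
for true_value, err in [(7, 0.1), (7, 0.499), (7, -0.499)]:
    measured = true_value + err
    assert round(measured) == true_value
print("errors below 0.5 all round back correctly")
```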

Nicolau

http://www.mat.puc-rio.br/~nicolau

