On Tuesday 31 October 2006 06:23, david eddy wrote:
> Steinar found:
> > and the following by George Woltman:
> > | Now the gotcha.  In v22.8, FFT crossovers are flexible.  If you test an
> > | exponent within 0.2% of a crossover point, then 1000 sample iterations
> > | are performed using the smaller FFT size and the average roundoff
> > | error calculated.  If the average is less than 0.241 for a 256K FFT or
> > | 0.243 for a 4M FFT, then the smaller FFT size is used.
>
> Could someone (Brian?:))) sketch a histogram showing how the rounding
> errors from -0.5 to +0.5 are distributed in the above case please.
> I am bemused ATM. There seems to be something spooky about these things.

OK, I'll dig through my archives...

However I think you may be being misled by the word "average". What is 
actually done is to run 1000 iterations. After each iteration the elements 
are examined and the (apparently) largest roundoff error is recorded. The 
"average" is then the arithmetic mean of these per-iteration maxima. (IMO a 
root-mean-square measure would be more realistic than the arithmetic mean, 
but the extra computational overhead simply isn't worth the effort, given 
that the practical effect would just be to move the "critical" value from 
~0.24 to something else.)

Note that we're dealing with the absolute value of the roundoff error, i.e. 
the values are constrained to the range 0.0 to 0.5 (inclusive).
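To make the distinction concrete, here is a minimal sketch (function names and 
the uniform stand-in distribution are mine, not Prime95's - the real per-element 
errors are anything but uniform, as discussed below) of how the "average" is 
formed from per-iteration maxima, with the RMS alternative alongside:

```python
import random

def max_roundoff_one_iteration(n_elements):
    # Stand-in for one L-L iteration: draw a fake absolute roundoff
    # error in [0, 0.5) for each FFT element and keep only the largest.
    return max(random.uniform(0.0, 0.5) for _ in range(n_elements))

def average_max_roundoff(n_iterations=1000, n_elements=1024):
    # Collect the largest error from each iteration, then average.
    maxima = [max_roundoff_one_iteration(n_elements)
              for _ in range(n_iterations)]
    mean = sum(maxima) / len(maxima)   # what v22.8 compares against ~0.24
    rms = (sum(m * m for m in maxima) / len(maxima)) ** 0.5  # alternative
    return mean, rms
```

The point is that the quantity being averaged is already a maximum over all 
elements, not a raw per-element error.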
>
> At least the "0.2%" is "evidence" for the sharpness of the crossover point,
> Steinar.

Go back to the analogy I posted last night. As the max per-iteration roundoff 
error increases, the graininess of the measurement also increases because of 
the relatively small number of bits left in the fractional part. The normal 
curve just doesn't apply; if you plot these you would expect a sort of 
bell-shaped curve but with a long, fat tail on the high side. However the 
actual max per-iteration roundoff errors measured have a "strange" 
distribution - not at all smooth, but with a very strong tendency to be 
(small integer)/(power of 2), i.e. there is a strong preference for values 
such as 0.25, 0.375, ... Yes, even 0.5, if we use a dangerously small run 
length. This is easy to understand once you remember that what is being 
measured is effectively a fixed-point value with the binary point close to 
the least significant end; when we throw away the integer part, what's left 
has only a very few significant bits.
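You can see this quantisation directly in any IEEE 754 double (the function 
name and the 45-bit example are mine, chosen for illustration): when a large 
integer part shares the 53-bit significand with the fraction, only a handful 
of fractional bits survive, so the recovered fraction snaps to 
(small integer)/(power of 2).

```python
def fractional_part_after_storage(int_part_bits, true_fraction):
    # Simulate storing integer_part + fraction in one double, then
    # discarding the integer part, as the roundoff measurement
    # effectively does. The sum is rounded to the double's 53-bit
    # significand, so the fraction is quantised to a k/2**n grid.
    big = float(2 ** int_part_bits)
    stored = big + true_fraction   # rounded to the representable grid
    return stored - big            # exact subtraction; quantised fraction

# With a 45-bit integer part, only ~7 fractional bits survive, so 0.3
# comes back as 0.296875, i.e. 19/64 - a (small integer)/(power of 2).
```

Usage: `fractional_part_after_storage(45, 0.3)` returns 0.296875, not 0.3, 
and every possible return value has a power-of-two denominator.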

Now consider the roundoff error of each element of the FFT as the 
coin-flipping game. The error doubles in a rather similar way to the length 
of the run of tails. What we have to do is to evaluate a "fair price" (in CPU 
cycles) for "playing the game" so that we will "usually" "make a profit" (get 
the right result) without too much risk of "going bankrupt" over a number of 
games equivalent to the number of individual element calculations in the 
whole L-L test.
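A back-of-envelope version of that fair-price calculation (all the numbers 
here are illustrative assumptions of mine, not Prime95's actual model): treat 
each element calculation as one game in which the error doubles with 
probability 1/2 per step, so a run of k doublings has probability 2**-k, and 
ask how likely it is that at least one of the very many games goes bankrupt.

```python
def risk_of_fatal_error(doublings_to_fail, games):
    # P(one game produces a run of `doublings_to_fail` tails) = 2**-k;
    # over `games` independent games, P(at least one such run) is:
    p_single = 0.5 ** doublings_to_fail
    return 1.0 - (1.0 - p_single) ** games

# An L-L test might play on the order of 10**13 element-games in total
# (millions of iterations times a million-element FFT). Tolerating a run
# of 50 doublings keeps the overall risk below about 1%:
# risk_of_fatal_error(50, 10**13) is roughly 0.009, while allowing only
# 10 doublings makes a wrong result a near certainty.
```

The "fair price" is then the smallest FFT size (cheapest in CPU cycles) whose 
error headroom keeps this overall bankruptcy risk acceptably small.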

Regards
Brian Beesley
_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime