Brian wrote:
> On Tuesday 31 October 2006 06:23, david eddy wrote:
> > Steinar found:
> > > and the following by George Woltman:
> > > | Now the gotcha.  In v22.8, FFT crossovers are flexible.  If you test an
> > > | exponent within 0.2% of a crossover point, then 1000 sample iterations
> > > | are performed using the smaller FFT size and the average roundoff
> > > | error calculated.  If the average is less than 0.241 for a 256K FFT or
> > > | 0.243 for a 4M FFT, then the smaller FFT size is used.
> >
> > Could someone (Brian? :))) sketch a histogram showing how the rounding
> > errors from -0.5 to +0.5 are distributed in the above case please?
> > I am bemused ATM. There seems to be something spooky about these things.
> 
> OK, I'll dig through my archives...

THX
> 
> However I think you may be being misled by the word "average". What is 
> actually done is to run 1000 iterations. After each iteration the elements 
> are examined and the (apparently) largest roundoff error is found. The 
> average is actually the arithmetic mean of the largest roundoff error in each 
> iteration.

I certainly was misled:
I thought "average roundoff error" meant "average roundoff error"!
(Not the average of the maximum error over the 896K elements.)
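Brian's description can be sketched in a few lines of Python (my own mock-up, not Prime95 code: the per-element errors here are fake uniform draws, and the 0.241 threshold is the 256K figure quoted above):

```python
import random
import statistics

random.seed(2)  # reproducible fake data

def iteration_max_error(n_elems):
    """Largest roundoff error seen among the elements of ONE iteration.
    The per-element errors are faked as uniform draws in [0, 0.25)."""
    return max(random.random() * 0.25 for _ in range(n_elems))

# "Average roundoff error" = mean of the per-iteration MAXIMA,
# not the mean over all individual element errors.
maxima = [iteration_max_error(1000) for _ in range(1000)]
avg_max = statistics.fmean(maxima)

# Prime95 v22.8 compares this average against a crossover threshold
# (0.241 for a 256K FFT, per the quote above).
use_smaller_fft = avg_max < 0.241
```

Note how the mean of maxima sits near the top of the underlying error range rather than near its centre, which is exactly the distinction that caused the confusion.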

I also see now why you did not appreciate my presumption of a normal
distribution for the roundoff error of a single element holding an integer
of a given length.

The more bits in the integer, the nearer we are to the corrupted LSBs of the
mantissa, so the greater the rms (standard deviation) of the distribution.
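A concrete way to see this scaling (my own illustration, using Python's math.ulp, available since 3.9): the gap between adjacent doubles doubles with every extra bit in the integer part, and the rounding error inherits that scale.

```python
import math

# The gap between adjacent doubles doubles with each extra integer bit,
# so the roundoff "noise floor" rises by a factor of two per bit.
assert math.ulp(2.0**40) == 2 * math.ulp(2.0**39)

# If the true value is uniform within one ulp, the rounding error is
# uniform on [-ulp/2, +ulp/2], with rms = ulp / sqrt(12).
rms_40_bits = math.ulp(2.0**40) / math.sqrt(12)
rms_41_bits = math.ulp(2.0**41) / math.sqrt(12)
```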

If I were able to look at 1000 iterations (not the first 20 or so, which are
always the same before MOD Mp kicks in), I would compute the mean and
standard deviation for each element of the FFT: 1000 samples for each.
Assuming the means are close to zero, I would then see whether the standard
deviations for elements of the same length were similar (as I expect), and
if so take the average of these for the longest integer elements to obtain
an improved estimate.
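The per-element bookkeeping described above might look like this (a sketch only: fabricated Gaussian errors stand in for the real iteration data, and the element count is tiny compared with a real 896K-element FFT):

```python
import random
import statistics

random.seed(1)  # reproducible fabricated data
n_elems, n_iters = 8, 1000  # tiny stand-in for a real FFT run length

# samples[e][i] = signed roundoff error of element e at iteration i
samples = [[random.gauss(0.0, 0.01) for _ in range(n_iters)]
           for _ in range(n_elems)]

means = [statistics.fmean(s) for s in samples]
stds = [statistics.stdev(s) for s in samples]

# If elements holding integers of the same length show similar stds,
# pool them into a single improved rms estimate for that length.
pooled_rms = statistics.fmean(stds)
```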

This rms error for the longest integers enables us to predict the probability
of the LL test failing due to incorrect rounding: as I said before, the
probability of a rounding error exceeding 10 standard deviations is ~10^-22,
which is more than good enough for all 10^15 rounding operations to pass off
successfully.
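The 10-sigma figure can be sanity-checked with the complementary error function (a back-of-envelope sketch; the exact constant depends on whether one counts one tail or both):

```python
import math

# Probability that a normal variate lands more than 10 sigma from its
# mean (both tails): erfc(10 / sqrt(2)), roughly 1.5e-23.
p_10_sigma = math.erfc(10 / math.sqrt(2))

# Even 10^15 rounding operations then give a negligible expected
# number of 10-sigma excursions.
expected_failures = 1e15 * p_10_sigma
```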

> Go back to the analogy I posted last night. As the max per iteration roundoff 
> error increases, the graininess of the measurement also increases because of 
> the relatively small number of bits in the fractional part. The normal curve 
> just doesn't apply; if you plot these you would expect a sort of bell shaped 
> curve but with a long & fat tail on the high side. However the actual max per 
> iteration roundoff errors measured have a "strange" distribution - not at all 
> smooth, but a very strong tendency to be (small integer)/(power of 2) i.e. 
> there is a strong preference for values such as 0.25, 0.375, ... Yes, even 
> 0.5, if we use a dangerously small run length. This is easy to understand 
> once you remember that what is being measured is effectively a fixed-point 
> value with the decimal point close to the least significant end; when we 
> throw away the integer part what's left has only a very few significant bits.
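Brian's quantisation point is easy to reproduce (the magnitudes below are my own choice, not Prime95's): once the integer part of a double is large, only a handful of fractional bits remain, so any measured error is forced onto a (small integer)/(power of 2) grid.

```python
# A double carries 53 significant bits. With an integer part near 2**48
# (49 integer bits), only 4 fractional bits survive, so the representable
# fractions are multiples of 1/16.
x = 2.0**48 + 0.3   # 0.3 is not representable at this magnitude
err = x - round(x)  # the "roundoff error" we would measure

# 0.3 gets snapped to the nearest sixteenth: 5/16 = 0.3125.
```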

This explains why taking the maximum of a million elements is such a crude
procedure statistically speaking.

I can understand why you did it, though.

David 


_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
