> On Mon, Oct 30, 2006 at 05:07:03PM +0000, david eddy wrote:
> > If you read my last post, you will "have an idea why I think these ranges
> > are clear cut". The probability of error varies sharply with exponent size.

Steinar Gunnerson replied:
> 
> Yes, I have read your last post, and I've seen that you keep asserting this.
> However, I've still not actually seen any evidence.

I too would like to see evidence from real data; for now, my argument rests on
the rapid change in tail probability that, say, a normal distribution would imply.

> > If you take an exponent near the bottom of a range and the probability of
> > error, using the FFT size below, is less than ~10% you will do better than
> > survive by using the smaller FFT (as I've pointed out before).
> 
> I'm sorry, but I don't understand this sentence.

The expected time to check such an exponent is lower with the smaller FFT. But
Brian Beesley has assured me that the number of such exponents is small.
> 
> I've spent some time reading up on the archives. Even though I couldn't find
> the original posts (in early 2002, I'd guess) describing the ideas, I found
> the following by Brian Beesley:

Thanks for these. They are completely relevant to the sentence you didn't 
understand.

> 
> | Please remember that the crossover points are a compromise between wasting
> | time by using an excessive FFT run length and wasting time due to runs
> | failing (or needing extra checking) due to using a FFT run length which is
> | really too short. There is no completely safe figure.

I suggested that the "break even" compromise was at a failure probability of
~10%, but Brian wouldn't want to take that risk for a first-time LL test.
Doublechecking might be different.
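For concreteness, the break-even arithmetic I have in mind goes like this (a
sketch only; the timing ratio and the redo-at-the-larger-size model are my
assumptions, not measured figures):

```python
def break_even_failure_prob(t_small, t_large):
    """Failure probability below which the smaller FFT wins on expected time.

    Model (an assumption): a failed run at the smaller FFT size is detected
    and redone in full at the larger size, so
        E[cost] = t_small + p * t_large,
    which beats t_large exactly when p < 1 - t_small / t_large.
    """
    return 1.0 - t_small / t_large

# Illustrative timings: suppose the larger FFT is ~11% slower overall.
print(break_even_failure_prob(0.9, 1.0))  # roughly 0.1, i.e. the ~10% figure
```

So the ~10% figure falls out of any model in which the larger FFT costs on the
order of 10% more per test.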

> 
> and the following by George Woltman:
> 
> | Now the gotcha.  In v22.8, FFT crossovers are flexible.  If you test an
> | exponent within 0.2% of a crossover point, then 1000 sample iterations
> | are performed using the smaller FFT size and the average roundoff
> | error calculated.  If the average is less than 0.241 for a 256K FFT or
> | 0.243 for a 4M FFT, then the smaller FFT size is used.

Hmmm...
This seems to blow my "normal distribution" notion out of the water.
With an average of 0.24 it would imply far too high a chance of an error over 0.5.
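To make that concrete: if per-iteration roundoff errors really were normal with
mean 0.24, the chance of at least one error exceeding 0.5 over a full LL test
depends enormously on the spread. A rough calculation (the sigma values and
iteration count are illustrative assumptions, not measured data):

```python
from math import erfc, sqrt

def tail_prob(mean, sigma, threshold=0.5):
    """P(X > threshold) for X ~ Normal(mean, sigma)."""
    z = (threshold - mean) / sigma
    return 0.5 * erfc(z / sqrt(2.0))

iters = 10_000_000  # order of magnitude for a large-exponent LL test
for sigma in (0.01, 0.03, 0.05):
    p = tail_prob(0.24, sigma)
    p_any = 1.0 - (1.0 - p) ** iters
    print(f"sigma={sigma}: per-iteration P(err>0.5) = {p:.2e}, "
          f"P(any exceedance in {iters} iterations) = {p_any:.3f}")
```

Even a modest spread like sigma = 0.05 would make failure near-certain over
millions of iterations, so either the spread is very small or the tail is not
normal; note also that the 0.241 figure George quotes is an average over 1000
iterations, a much tighter quantity than a single iteration's error.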

> | 
> | Brian Beesley has been a great help in investigating revised crossover
> | points and analyzing the distribution of round off errors.  We noticed
> | that consecutive exponents can have a pretty big difference in average
> | roundoff error (e.g. one exponent could be 0.236 and the next 0.247).

I would have thought this wasn't statistically significant.
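Whether a 0.011 difference between two 1000-iteration averages is significant
depends on the per-iteration spread, which I don't have; a rough two-sample
check, with an illustrative sigma (my assumption), suggests it could well be:

```python
from math import sqrt

def two_mean_z(mean_a, mean_b, sigma, n):
    """z-score for the difference of two n-iteration averages, assuming
    independent iterations with common per-iteration std dev sigma."""
    se_diff = sigma * sqrt(2.0 / n)
    return abs(mean_a - mean_b) / se_diff

# 0.236 vs 0.247 with 1000 samples each; sigma = 0.05 is an assumption.
print(two_mean_z(0.236, 0.247, sigma=0.05, n=1000))  # about 4.9 standard errors
```

At nearly five standard errors (under that assumed sigma), the difference would
be significant; a larger per-iteration spread would of course weaken that.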
> 
> Furthermore, whatsnew.txt keeps showing minor adjustments to the crossover
> points around that time; versions 19.0, 21.2, 22.2, 22.3, 22.4, 22.7
> and 22.8 all have changes of some sort relating to these issues.

Different versions necessitate this, especially if the crossovers are SHARP!
> 
> In short, I don't believe these issues _are_ all that clear cut. I think
> George and Brian have been doing a great work actually looking into them and
> making decisions based on experimental and other data, but I have a hard time
> believing that there are any obvious yes/no ranges for a given FFT size.

I was not suggesting that it wasn't worth the effort to get them "right".
Quite the contrary, as my persistence on this topic demonstrates.

David




_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
