On Monday 30 October 2006 20:33, david eddy wrote:
>
> I suggested that the "break even" compromise was at a failure probability
> of ~10%, but Brian wouldn't want to take that risk for a first-time LL test.
> Double-checking might be different.

I don't see why DC should be any different... only the server really needs to 
make a distinction between first and subsequent LL tests.
>
> > | Brian Beesley has been a great help in investigating revised crossover
> > | points and analyzing the distribution of round off errors.  We noticed
> > | that consecutive exponents can have a pretty big difference in average
> > | roundoff error (e.g. one exponent could be 0.236 and the next 0.247).

And this persisted with more iterations. I presume that (inevitable) rounding 
errors in the "magic numbers" used in the transforms interact differently 
with different exponents.
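A minimal sketch of how such per-exponent roundoff behaviour can be measured empirically (assuming numpy, and using a plain floating-point cyclic convolution rather than Prime95's actual IBDWT): square a random digit vector via FFT and record how far each output coefficient lands from the nearest integer.

```python
import numpy as np

def max_roundoff(digits):
    """Square a digit vector via floating-point FFT (cyclic self-
    convolution) and return the largest distance of any resulting
    coefficient from the nearest integer."""
    f = np.fft.rfft(digits)
    prod = np.fft.irfft(f * f, n=len(digits))
    return float(np.max(np.abs(prod - np.round(prod))))

rng = np.random.default_rng(1)
n = 4096                       # FFT run length (real words) - illustrative
b = 16                         # bits per word; larger b pushes errors up
digits = rng.integers(0, 1 << b, n).astype(np.float64)
err = max_roundoff(digits)
print(err)                     # stays well below 0.5 for a safe (n, b) pair
```

Repeating this over many random inputs (or over the residues of an actual test) gives the kind of average/maximum error statistics discussed above; an error approaching 0.5 means the integer result can no longer be recovered reliably.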
>
> I would have thought this wasn't statistically significant.

It is. It surprised me when I was doing the experiments, but I'm not 
responsible for "reality". Treating error analysis as experimental science 
made more sense than approaching it theoretically, since theory (as known to 
me) didn't seem able to cope with the fact that the numbers involved are 
discrete rather than continuous.
>
> > Furthermore, whatsnew.txt keeps showing minor adjustments to the
> > crossover points around that time; versions 19.0, 21.2, 22.2, 22.3, 22.4,
> > 22.7 and 22.8 all have changes of some sort relating to these issues.

This work was done around v22. Minor tweaks may still be needed, especially to 
larger run lengths, as experience is gathered.
>
> Different versions necessitate this, especially if the crossovers are SHARP!
If you mess with the underlying algorithms you will very likely need to adjust 
the run length cutoffs. If you just change the implementation of an existing 
algorithm - prefetch, block size etc. - then it shouldn't be necessary to 
change the cutoffs.
>
> > In short, I don't believe these issues _are_ all that clear cut. I think
> > George and Brian have been doing great work actually looking into them
> > and making decisions based on experimental and other data, but I have a
> > hard time believing that there are any obvious yes/no ranges for a given
> > FFT size.

Well - except for exponents close to run length cutoffs, the choice is pretty 
clear cut. If you choose the next smaller run length you are bound to get a 
wrong result; if you choose the next larger run length, you are wasting a 
very significant number of CPU cycles.
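To put a rough number on the "too big" penalty: per-iteration cost scales roughly as n·log n for an FFT of length n, and (an assumption here, for illustration) adjacent run lengths in the table typically step by about 1.25x, e.g. 2048K to 2560K. A back-of-envelope sketch:

```python
import math

def relative_extra_cost(n_small, n_large):
    """Extra per-iteration work of the larger run length, under a
    simple n * log2(n) FFT cost model."""
    cost = lambda n: n * math.log2(n)
    return cost(n_large) / cost(n_small) - 1.0

# hypothetical adjacent run lengths (in K words), a 1.25x step
extra = relative_extra_cost(2048, 2560)
print(f"{extra:.1%}")   # -> 28.7%, a bit over the naive 25% (the log factor)
```

Since the iteration count is fixed by the exponent, that per-iteration overhead is the whole-test overhead: in the same ballpark as the ~25% figure below.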

What is now done for exponents close to the cutoffs is to experiment on the 
actual exponent & choose a run length dependent on the outcome. We have 
reason to believe that this will rarely waste 100% of the CPU cycles by 
picking "too small", and will not often waste ~25% of them by picking safe 
but "too big".
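The selection scheme just described might look something like the sketch below. The 0.40 error threshold, the trial count, and the `trial_roundoff` helper are assumptions for illustration, not Prime95's actual parameters or code.

```python
ROE_LIMIT = 0.40   # assumed safety threshold on max roundoff error

def choose_run_length(exponent, candidates, trial_roundoff, trials=100):
    """Pick the smallest candidate run length whose observed maximum
    roundoff error over a few trial squarings stays under the limit.

    trial_roundoff(exponent, n, trials) is assumed to run `trials`
    squarings at run length n and return the worst error seen."""
    for n in sorted(candidates):
        if trial_roundoff(exponent, n, trials) < ROE_LIMIT:
            return n           # smallest safe length: no wasted cycles
    return max(candidates)     # fall back to the safe, larger length

# toy stand-in for the trial: pretend the smaller length is borderline
fake = lambda p, n, t: 0.47 if n == 2048 else 0.24
print(choose_run_length(20_000_003, [2048, 2560], fake))   # -> 2560
```

The point of trialling the actual exponent, rather than relying on a fixed cutoff table, is exactly the per-exponent variability in average roundoff error noted earlier in the thread.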

Regards
Brian Beesley
_______________________________________________
Prime mailing list
[email protected]
http://hogranch.com/mailman/listinfo/prime
