>I am sorry, but what is kurtosis? Some measure of the speed with which
>the tail of the distribution falls, maybe? Or some sort of curvature?
Kurtosis is the fourth standardized moment of a distribution; "excess"
kurtosis is measured relative to the normal distribution, whose
kurtosis is 3.
A distribution with positive excess kurtosis has longer "tails" than a
normal distribution with the same standard deviation.
A good example of a smooth continuous distribution with negative
kurtosis is defined by the probability density function
p(x) = a * e^(-(x-u)^4/b) (for suitable constants a, b)
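A quick numerical check of the claim (a sketch, not part of the original
discussion): estimate excess kurtosis for normal samples and for samples
drawn from the quartic-exponential density above (taking u=0, b=1 for
simplicity, via rejection sampling):

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (the normal's value)."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

rng = np.random.default_rng(0)

# Normal reference: excess kurtosis should come out near 0.
normal = rng.standard_normal(200_000)

# Rejection-sample from p(x) proportional to exp(-x^4); the density
# is essentially zero outside [-2, 2], so a uniform proposal there
# is adequate.
cand = rng.uniform(-2.0, 2.0, 1_000_000)
quartic = cand[rng.uniform(0.0, 1.0, cand.size) < np.exp(-cand**4)]

print(excess_kurtosis(normal))   # near 0
print(excess_kurtosis(quartic))  # negative (roughly -0.8)
```

The negative value for the quartic density confirms the shorter-tailed
("platykurtic") shape described above.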
>
>My own impression is that since the errors come from adding many many
>small errors we should expect something close to a gaussian distribution,
>except that very large errors should be not just very improbable
>but actually impossible. Large but not so very large errors should
>also have true probabilities smaller than whatever is predicted by
>the gaussian distribution.
The source of the errors is the difference between a floating-point
value and the nearest integer. The error measure is therefore not
the result of summing many small errors.
Also, we are operating in the region where the nearest integer sometimes
has nearly as many significant bits as the precision of the floating-
point number being compared.
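To illustrate the precision squeeze (assuming IEEE-754 double
precision, which is an assumption about the arithmetic in use): once
the integer being carried needs close to the full 53 significant bits,
neighbouring representable values are a large fraction of 1 apart, and
"nearest integer" loses its safety margin:

```python
import math

# math.ulp(x) gives the gap between x and the next representable
# double. Near 2^52 the gap is already a whole unit; beyond 2^53
# not every integer is even representable.
for k in (30, 45, 50, 52, 53):
    print(k, math.ulp(2.0 ** k))
```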
What we are trying to find is how far we can push this before there is
a real danger of rounding to the wrong integer. The point is that the
fewer "guard bits" we can get away with, the larger the exponents we
can use with a particular transform size - and there is a substantial
performance loss, in CPU time per iteration, for each increase in
transform size.
Regards
Brian Beesley
________________________________________________________________
Unsubscribe & list info -- http://www.scruz.net/~luke/signup.htm