On Mon, 10 May 1999, Brian J Beesley wrote:

> > > though.  So outside about 14 sigmas you should be able to say the
> > > probability is below 10e-40. The problem is that if there are small
> > > deviations from "Gaussian-ness" way out on the wings of your distribution,
> > > the REAL probability of a certain result is not well approximated by the
> > > Error Function result.

...

> In fact, any distribution with negative kurtosis will be a better case 
> than a Gaussian distribution in this respect, provided that it is 
> symmetric (zero skew) and reasonably well behaved.
> 
> On numeric grounds, we should expect the distribution of the 
> underlying errors to have a negative kurtosis. This is because of the 
> finite precision of the result - the data points themselves are 
> uncertain to an extent, e.g. if the data is accurate to only 4 bits 
> precision then a "true" value of 0.1 could be recorded as either 
> 0.0625 or 0.1250. This effect tends to reduce the kurtosis (the 
> central "hump" of the distribution is flattened and broadened - this 
> causes an overestimation of the population standard deviation from 
> a sample of observations, however large).

I am sorry, but what is kurtosis? Is it some measure of how quickly the
tails of the distribution fall off, or some sort of curvature?
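(For what it's worth: kurtosis is the standardized fourth central moment,
E[(x-mu)^4]/sigma^4; "excess" kurtosis subtracts 3 so a Gaussian scores
zero, and negative excess kurtosis means a flatter hump and lighter tails.
A minimal sketch, using a uniform sample as the textbook negative-kurtosis
case -- the function name and the sample sizes here are just illustration:)

```python
import random
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: E[(x-mu)^4]/sigma^4 - 3.
    Zero for a Gaussian; negative means a flatter hump / lighter tails."""
    mu = statistics.fmean(xs)
    var = statistics.fmean([(x - mu) ** 2 for x in xs])
    m4 = statistics.fmean([(x - mu) ** 4 for x in xs])
    return m4 / var ** 2 - 3

random.seed(1)
n = 200_000
uniform = [random.uniform(-1, 1) for _ in range(n)]  # bounded, flat-topped
normal = [random.gauss(0, 1) for _ in range(n)]      # Gaussian reference

# Uniform converges to -1.2 in the limit; Gaussian converges to 0.
print(round(excess_kurtosis(uniform), 2))
print(round(excess_kurtosis(normal), 2))
```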

My own impression is that, since the errors come from adding many small
errors, we should expect something close to a Gaussian distribution --
except that very large errors should be not just very improbable but
actually impossible, because each individual error is bounded and so their
sum is too. Large-but-not-extreme errors should likewise have true
probabilities smaller than the Gaussian prediction.
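(A toy simulation of that point, under an assumed model -- each partial
result contributes an independent rounding error uniform in [-u, u], which
is not the actual FFT error analysis, just an illustration: the sum looks
Gaussian near the middle, but its tails are hard-truncated at n*u, many
sigmas out, where a true Gaussian would still assign nonzero mass.)

```python
import math
import random

# Assumed model: n_terms independent rounding errors, each uniform in
# [-u, u]; the observed error is their sum.
random.seed(2)
n_terms, u, trials = 64, 0.5, 100_000
sums = [sum(random.uniform(-u, u) for _ in range(n_terms))
        for _ in range(trials)]

sigma = math.sqrt(n_terms * (2 * u) ** 2 / 12)  # std dev of the sum
bound = n_terms * u                             # hard bound: |sum| <= n*u
print(f"sigma = {sigma:.3f}, hard bound = {bound} = {bound/sigma:.1f} sigma")

# The Gaussian approximation puts (tiny) mass beyond the bound; the true
# distribution puts exactly zero there -- every sample respects the bound.
print(max(abs(s) for s in sums) <= bound)
```

With these numbers the hard cutoff sits near 14 sigma, which is why the
Error Function estimate is only an upper-tail approximation out there.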

http://www.mat.puc-rio.br/~nicolau
