At 10:13 AM 5/11/99, you wrote:
The theoretical distribution is most definitely not an F-distribution,
though the shape may be reminiscent.
If we knew what the distribution is theoretically, we could fit it. I'm a
former temporary part-time adjunct instructor of statistics for a minor
Yesterday I wrote:

Could I suggest that the maximum convolution error is likely to occur
when whole blocks of bits are 1, forcing large integers into the
transform and making rounding/truncation errors most likely.
Consider the LL test algorithm x' = (x^2 - 2) (mod 2^p - 1)
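
For small p that iteration can be checked with exact integer
arithmetic. Here is a minimal sketch of mine (not prime95's code;
prime95 does the squaring with a floating-point FFT, which is where
the convolution errors come from), exact only for p <= 31 so the
square fits in 64 bits:

    #include <stdint.h>
    #include <stdio.h>

    /* Lucas-Lehmer: M_p = 2^p - 1 is prime iff x == 0 after p - 2
       iterations of x' = (x^2 - 2) mod M_p, starting from x = 4.
       Exact integer version, valid for odd prime p <= 31. */
    static int lucas_lehmer(unsigned p)
    {
        uint64_t m = ((uint64_t)1 << p) - 1;
        uint64_t x = 4;
        for (unsigned i = 0; i < p - 2; i++) {
            x = x * x % m;        /* exact; the FFT approximates this */
            x = (x + m - 2) % m;  /* subtract 2 without underflow */
        }
        return x == 0;
    }

    int main(void)
    {
        unsigned ps[] = { 3, 5, 7, 11, 13, 17, 19, 23, 31 };
        for (unsigned i = 0; i < sizeof ps / sizeof ps[0]; i++)
            printf("M%u is %s\n", ps[i],
                   lucas_lehmer(ps[i]) ? "prime" : "composite");
        return 0;
    }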
On Mon, 10 May 1999, Brian J Beesley wrote:

though. So outside about 14 sigmas you should be able to say the
probability is below 10^-40. The problem is that if there are small
deviations from "Gaussian-ness" way out on the wings of your distribution,
the REAL probability of a certain result is not well approximated by the
Error function.
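
(A quick numeric check of that claim -- my illustration, not from the
mail: for an exact Gaussian the two-sided tail probability at 14 sigmas
is erfc(14/sqrt(2)), about 1.6e-44, indeed far below 10^-40.)

    #include <math.h>
    #include <stdio.h>

    /* Two-sided Gaussian tail at k sigmas: P(|Z| >= k) = erfc(k/sqrt(2)).
       Compile with -lm; prints about 1.57e-44 for k = 14. */
    int main(void)
    {
        printf("P(|Z| >= 14) = %.3e\n", erfc(14.0 / sqrt(2.0)));
        return 0;
    }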
I am sorry, but what is kurtosis? Some measure of the speed with which
the tail of the distribution falls, maybe? Or some sort of curvature?
Kurtosis is the standardized fourth central moment,
E[(x - mean)^4] / sigma^4; "excess" kurtosis is measured relative to
the normal distribution, whose kurtosis is 3.
A distribution with positive excess kurtosis has longer "tails" than a
normal distribution.
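
(An aside of mine, not from the thread: the excess kurtosis of a sample
of convolution errors is easy to estimate, and gives a quick numeric
test of how "Gaussian" the wings really are.)

    #include <stddef.h>
    #include <stdio.h>

    /* Sample excess kurtosis: m4 / m2^2 - 3, with m2 and m4 the second
       and fourth central moments.  Zero for a perfect Gaussian;
       positive values mean heavier tails. */
    static double excess_kurtosis(const double *x, size_t n)
    {
        double mean = 0.0, m2 = 0.0, m4 = 0.0;
        for (size_t i = 0; i < n; i++)
            mean += x[i];
        mean /= (double)n;
        for (size_t i = 0; i < n; i++) {
            double d = x[i] - mean, d2 = d * d;
            m2 += d2;
            m4 += d2 * d2;
        }
        m2 /= (double)n;
        m4 /= (double)n;
        return m4 / (m2 * m2) - 3.0;
    }

    int main(void)
    {
        /* made-up error values, purely for illustration */
        double sample[] = { -0.30, -0.10, 0.00, 0.05, 0.10, 0.20, 0.90 };
        printf("excess kurtosis = %f\n",
               excess_kurtosis(sample, sizeof sample / sizeof sample[0]));
        return 0;
    }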
At 12:00 PM 5/8/99 -0400, Chris Nash wrote:
This is a very good point - if we are assuming a Gaussian distribution, then
we are assuming the best case. The worst case is given by Tchebycheff's
theorem, which states that, given a probability distribution of which only
the mean and standard deviation are known, the probability of a result more
than k standard deviations from the mean can be no larger than 1/k^2.
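
(To make the gap concrete -- my illustration, not Chris's: the loop
below compares the exact Gaussian tail with the distribution-free
Chebyshev bound at k sigmas. At k = 14, Chebyshev guarantees only
1/196, about 5e-3, while the Gaussian gives roughly 1.6e-44.)

    #include <math.h>
    #include <stdio.h>

    /* Gaussian tail P(|Z| >= k) vs. the Chebyshev bound 1/k^2.
       Compile with -lm. */
    int main(void)
    {
        for (double k = 2.0; k <= 14.0; k += 2.0)
            printf("k = %4.1f   Gaussian %.3e   Chebyshev %.3e\n",
                   k, erfc(k / sqrt(2.0)), 1.0 / (k * k));
        return 0;
    }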
At 06:17 PM 5/8/99 -0500, Ken Kriesel wrote:
Aren't Gaussians symmetric about the mean value? What George plotted is not.

Yes, but it isn't too far off. But it does drop off quite a bit faster on the
left. That is why I think it may be better to chop the data at the mean, throw
away that under
Hi all,
I'm working on version 19 of prime95 and I need your help.
In the past, the exponents at which a larger FFT is used were picked
rather haphazardly. I simply picked a few exponents, trying to find
one that could run a thousand or so iterations without the convolution
error greatly
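
(Such a search can be mechanized. The sketch below is mine, not
prime95 code: max_roundoff_error() is a hypothetical stand-in with a
toy error model, and the 0.4 limit, FFT size, and exponent range are
placeholders.)

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical stand-in for running a thousand or so LL iterations
       at exponent p with an fft_len-word transform and recording the
       largest convolution error seen.  The exponential toy model is
       only a placeholder for real measurements. */
    static double max_roundoff_error(unsigned p, unsigned fft_len)
    {
        double bits_per_word = (double)p / (double)fft_len;
        return 1e-6 * pow(2.0, 2.2 * bits_per_word);
    }

    /* Binary-search the largest exponent an FFT length can handle
       while the observed error stays under a safety limit. */
    static unsigned find_crossover(unsigned fft_len, unsigned lo,
                                   unsigned hi, double limit)
    {
        while (lo < hi) {
            unsigned mid = lo + (hi - lo + 1) / 2;
            if (max_roundoff_error(mid, fft_len) < limit)
                lo = mid;       /* still safe: try a larger exponent */
            else
                hi = mid - 1;
        }
        return lo;
    }

    int main(void)
    {
        /* placeholder values: 256K-word FFT, error limit 0.4 */
        printf("crossover exponent: %u\n",
               find_crossover(256 * 1024, 1u << 20, 1u << 24, 0.4));
        return 0;
    }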
George,
You indicate that the error distribution looks "like a Bell curve".
There is a reasonable theoretical basis for the errors to follow a Bell
curve: many random pluses and minuses combine, as in the classic
"random walk" problem, to give a Gaussian probability distribution.