In article <asppcs$nh4$[EMAIL PROTECTED]>,
Koen Vermeer <[EMAIL PROTECTED]> wrote:
>On Thu, 05 Dec 2002 22:24:29 +0000, R. Martin wrote:

>>> Now, in my case, I have a known mean (zero) and unknown variance, meaning
>>> that my situation is somewhere in between Kolmogorov-Smirnov and
>>> Lilliefors. Is there a separate test for this?
>> Not that I know of, but there's a lot of things I don't know. :-)

>That doesn't matter. Unless you ought to know of it, of course... But
>that would imply it exists.

>> I'd run both using your known mean and an estimate of the variance
>> and see what the results are.  If your set passes both with flying
>> colors I wouldn't worry about it.  If both are marginal or it

>That is true of course. It very likely passes either both or neither.
>But in a more academic context, I was interested in it. Lilliefors
>states that there is a 2/3 relationship between his values and the KS
>ones. I am wondering how much of that can be attributed to the
>estimation of the mean, and how much of it is due to estimating the
>standard deviation.
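
A minimal Python sketch of the "run both tests" check suggested above
(the data file name is hypothetical; scipy and statsmodels are assumed
to be available):

    import numpy as np
    from scipy import stats
    from statsmodels.stats.diagnostic import lilliefors

    x = np.loadtxt("data.txt")  # hypothetical data file

    # KS test against N(0, sigma_hat^2): the mean is taken as known
    # (zero); sigma_hat is the MLE under the known-mean model.
    sigma_hat = np.sqrt(np.mean(x ** 2))
    d_ks, p_ks = stats.kstest(x, "norm", args=(0.0, sigma_hat))
    # Caution: this p-value treats sigma_hat as known, which is exactly
    # the issue under discussion, so read it as a conservative guide.

    # Lilliefors test: both mean and variance estimated from the data.
    d_lf, p_lf = lilliefors(x, dist="norm")

    print(d_ks, p_ks, d_lf, p_lf)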

There is something I can contribute on this concerning the tail
asymptotics.  For the validity of the asymptotic approximation,
I refer you to my paper with Sethuraman, "Probabilities of 
moderate deviations", _Sankhya_, 1965.

The classical asymptotic distribution of sqrt(n)(F_n(x) - F(x))
is that of a Gaussian process with covariance function (the 
covariance function here is exact) F(x)*(1 - F(y)) for x <= y.
If we have parameters estimated by MLE or other BAN estimators,
this needs to be reduced by the regression on them.  As the
sample mean and variance are uncorrelated, the reductions add.

In the normal case (in standardized units, with f the standard normal
density), the reduction for the sample mean is f(x)f(y), and for the
sample variance it is xyf(x)f(y)/2.  The variance of the process is
still maximized at x=0 in both cases, but if only the variance is
estimated, there is no reduction there, since the factor xy vanishes
at the median.
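
Restating the last two paragraphs in display form (my notation, not
from the original post: \Phi and \varphi are the standard normal cdf
and density, arguments in standardized units, x <= y):

    C_0(x,y)              = \Phi(x)\,(1 - \Phi(y))        % no parameters estimated
    C_{\mu}(x,y)          = C_0(x,y) - \varphi(x)\varphi(y)   % mean estimated
    C_{\sigma^2}(x,y)     = C_0(x,y) - \frac{xy}{2}\,\varphi(x)\varphi(y)  % variance only
    C_{\mu,\sigma^2}(x,y) = C_0(x,y) - \varphi(x)\varphi(y)
                            - \frac{xy}{2}\,\varphi(x)\varphi(y)  % both (Lilliefors case)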

This means that, if only the variance is estimated, the KS
statistic in the tails will have an approximate probability between
that of the KS without estimated parameters, 2*exp(-2*c^2), and
that of the median, 2*exp(-2*c^2)/(c*sqrt(2*pi)), while estimating
the mean substantially reduces the variance.  This is only an
asymptotic approximation for large c; better results are likely
to be obtained only by simulation.
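
As a minimal sketch of such a simulation (my construction, not from
the post): a Monte Carlo approximation of the null distribution of the
KS statistic when the mean is known to be zero and only the variance
is estimated.  The statistic is scale invariant, so sampling from the
standard normal suffices.  Assumes numpy and scipy:

    import numpy as np
    from scipy.stats import norm

    def ks_known_mean(x):
        # KS distance to N(0, s^2), with the mean fixed at 0 and
        # s the MLE of the scale under the known-mean model.
        s = np.sqrt(np.mean(x ** 2))
        z = np.sort(x) / s
        n = len(z)
        cdf = norm.cdf(z)
        d_plus = np.max(np.arange(1, n + 1) / n - cdf)
        d_minus = np.max(cdf - np.arange(n) / n)
        return max(d_plus, d_minus)

    rng = np.random.default_rng(1)
    n, reps = 100, 20000
    null = np.array([ks_known_mean(rng.standard_normal(n))
                     for _ in range(reps)])
    print("approximate 95% critical value:", np.quantile(null, 0.95))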
-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Department of Statistics, Purdue University
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558