Raul Miller wrote:
> On 6/27/07, John Randall <[EMAIL PROTECTED]> wrote:
>> $\sum (X_i-\bar X)^2 = \sum (X_i-\mu)^2 - n(\bar X-\mu)^2$,
>> since $\sum (X_i-\mu) = (\sum X_i) - n\mu = n(\bar X-\mu)$.
>
> Ok, but these are all zero for the case where you're working with an
> accurate model -- where \mu and \bar X are equal.
>
> Mathematically speaking, I don't see this as a valid approach.
>

It does not matter how accurate your model is except in the degenerate
case of a constant distribution: \bar X is a random variable, while \mu is
a number.  It makes no sense to talk about \mu and \bar X being equal.  It
does make sense to talk about P(\bar X=\mu); however, for a continuous
distribution this probability is always zero.

What is important is the expectation of \bar X, and E(\bar X)=\mu.  Note
that \bar x, the value you get from an actual sample, is not the same as
the random variable \bar X.  Similarly, S^2 is a random variable such that
E(S^2)=\sigma^2, the population variance.
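The distinction can be checked by simulation.  A minimal Python sketch (not J; the population parameters mu, sigma and the sample size n are illustrative assumptions) averages S^2 over many samples and shows that the n-1 denominator is unbiased while the n denominator systematically underestimates \sigma^2:

```python
import random

random.seed(0)

mu, sigma = 5.0, 2.0      # assumed population mean and sd for the demo
n, trials = 10, 100_000   # sample size and number of simulated samples

sum_s2 = 0.0   # running total for S^2 (denominator n-1)
sum_v = 0.0    # running total for the denominator-n version
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_s2 += ss / (n - 1)
    sum_v += ss / n

print(sum_s2 / trials)  # close to sigma^2 = 4 (unbiased)
print(sum_v / trials)   # close to (n-1)/n * sigma^2 = 3.6 (biased low)
```

Each individual S^2 value differs from \sigma^2; it is only the expectation that matches, which is exactly the point about random variables above.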

> Numerically speaking, even if they're "just close" instead of "identical",
> we're heading into unstable territory.

We're talking about basic definitions here.  There are several
computational techniques that can give accurate answers.
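One such technique is Welford's one-pass algorithm, which avoids the catastrophic cancellation of the naive sum-of-squares formula.  A Python sketch (the test data with its large offset is an illustrative assumption):

```python
def welford_variance(xs):
    """Numerically stable one-pass sample variance (Welford's algorithm)."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the running mean
    for x in xs:
        n += 1
        delta = x - mean
        mean += delta / n          # update the running mean
        m2 += delta * (x - mean)   # update using old and new deviations
    return m2 / (n - 1)  # unbiased S^2

# Even with a huge constant offset, the result stays accurate;
# the naive sum(x^2) - n*mean^2 formula loses most of its digits here.
data = [1e9 + v for v in (4.0, 7.0, 13.0, 16.0)]
print(welford_variance(data))  # 30.0
```

The naive formula subtracts two nearly equal large numbers; Welford's update works only with deviations, which stay small.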
>
> Put differently: as far as I can see, I have to accept the definition
> of standard deviation as an axiom, if I am going to work with it at
> all.
>
You do not have to accept this as an axiom.  You can come up with any
statistic that is an unbiased estimator of \sigma^2.  The sample variance
with denominator n will not do, since its expectation is
((n-1)/n)\sigma^2 rather than \sigma^2.  However, you then need to compare
the variance of your statistic with that of S^2.  If it is greater, your
statistic is not as good.
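As a concrete comparison, a Python sketch (my own illustrative example, not from the thread): (X_1 - X_2)^2 / 2 is also an unbiased estimator of \sigma^2, since X_1 - X_2 has mean 0 and variance 2\sigma^2, but simulation shows its variance is far larger than that of S^2:

```python
import random

random.seed(1)

mu, sigma = 0.0, 1.0     # assumed population parameters for the demo
n, trials = 10, 100_000

s2_vals = []   # the usual S^2 with denominator n-1
alt_vals = []  # (X_1 - X_2)^2 / 2, also unbiased for sigma^2
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s2_vals.append(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    alt_vals.append((xs[0] - xs[1]) ** 2 / 2)

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

print(mean(s2_vals), mean(alt_vals))  # both near sigma^2 = 1 (unbiased)
print(var(s2_vals), var(alt_vals))    # alternative is far more variable
```

Both estimators have the right expectation, but the alternative throws away n-2 of the observations, so its variance is much larger; that is the sense in which S^2 is the better statistic.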

Best wishes,

John

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm