Here's a (hopefully) simple explanation of some of this.

With your sample of n weights, the natural way to think of them is that
you have one random variable, X = weight of a randomly chosen
individual, and you have n observations of this one variable.

The mathematical model of this situation is rather different. We suppose
we have n random variables: X1 = weight of the first randomly chosen
individual, X2 = weight of the second randomly chosen individual, X3 =
weight of the third randomly chosen individual, and so on. We also suppose
that these variables have the same distribution. Furthermore, we assume
that the variables are independent - this is valid because of the random
selection.

This approach is reasonable in practical terms, because it is at
least feasible that the distribution changes as you proceed to take
your sample - particularly if the population is small - so 'identically
distributed' really is an assumption, not a given. More importantly, the
approach lets us use the mathematics of functions of random variables. For
example, E(X1+X2+...+Xn) = E(X1) + E(X2) + ... + E(Xn) = n*E(X), so that
E(Xbar) = E((X1+X2+...+Xn)/n) = n*E(X)/n = E(X).
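A quick simulation sketch may make this concrete. The numbers below (a
population mean of 80 kg, sd of 12 kg, sample size 25) are made up purely
for illustration; the point is only that the average of many observed Xbar
values settles near E(X):

```python
import random

random.seed(42)

mu = 80.0      # hypothetical population mean weight (kg), made up for illustration
sigma = 12.0   # hypothetical population standard deviation (kg)
n = 25         # sample size
reps = 20000   # number of simulated samples

# Each replication draws n i.i.d. observations X1,...,Xn
# and records the sample mean Xbar for that sample.
xbars = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbars.append(sum(sample) / n)

# Averaging Xbar over many samples approximates E(Xbar),
# which should sit close to mu, i.e. E(Xbar) = E(X).
mean_of_xbars = sum(xbars) / reps
print(mean_of_xbars)
```

Each pass through the loop is one realisation of the whole sample
(X1,...,Xn), which is exactly the sense in which a sample of size n is
n random variables rather than one.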



James Ankeny wrote:
> 
>   According to a textbook I have, a random sample of n objects from a random
> variable X, is composed of n random variables itself, namely, X1,X2,...,Xn.
> I am having some difficulties in figuring out how to interpret this. For
> example, suppose that you are considering the population of adult males in
> the U.S., and the random variable is weight. If you take a random sample of
> n individuals, are the elements of the sample random (prior to observing
> them, of course) because you might observe something different in another
> sample due to measurement error? Or perhaps you might get something
> different if you took the sample at a different time when weight has
> changed? Also, if the elements of a random sample are random variables
> themselves, do they have their own parameters, such as mean and standard
> deviation, as well as their own density functions and cumulative
> distribution functions?
> 
>   Also, if a statistic is a function of random variables, can a statistic
> take the form of a density function with a random vector representing the n
> variables? I know, conceptually, that the sampling distribution of a
> statistic is purely theoretical and that it represents how a statistic
> varies from one sample to another. Mathematically, however, I do not
> understand how to represent this, or if the sampling distribution of a
> statistic is analogous to the distribution of a random variable which may
> have a density function.
> 
>   I do not know if these questions even make any sense, but the concepts are
> fairly confusing to me. Any help would be greatly appreciated.
> 
> 
> =================================================================
> Instructions for joining and leaving this list and remarks about
> the problem of INAPPROPRIATE MESSAGES are available at
>                   http://jse.stat.ncsu.edu/
> =================================================================

-- 
Alan McLean ([EMAIL PROTECTED])
Department of Econometrics and Business Statistics
Monash University, Caulfield Campus, Melbourne
Tel:  +61 03 9903 2102    Fax: +61 03 9903 2007


