At 03:43 PM 7/13/01 -0400, you wrote:
>The Journal of Statistics Education is now an ASA journal and has moved
>to the ASA web site. Jim Albert's article can be found at
>
>http://www.amstat.org/publications/jse/v3n3/albert.html
>
>Jackie Dietz
Not to counter the intent of the above article, which is a good
discussion of a specific Bayesian application, but I think it is
important to point out that simulations make some of the statements in
the paper not quite as dramatically correct as they once were. For
example, point 2 of the paper reads:
2. The traditional approach to teach the above inferential concepts is based
on the relative frequency notion of probability. Although this approach is
implemented in practically all elementary statistics textbooks, it can be
difficult to teach. In particular, it can be hard for students to
distinguish a parameter, such as a population proportion p, from the
proportion statistic \hat{p} that is computed from a sample. The idea of a
sampling distribution is often mysterious to students. There are many
concepts included in a discussion of a sampling distribution, such as the
notion of taking a random sample, computing a statistic from the sample,
and then repeating the process many times to understand the repeated
sampling behavior of the statistic. In addition, it can be difficult to
communicate the correct interpretation of statistical confidence
statements. If a student computes a 95% confidence interval from a
particular sample, he or she may think that this particular interval
contains the parameter of interest with a high probability. The student has
to be corrected -- in classical inference, one is confident only in the
coverage probability of the random interval. Likewise, a p-value can be
misinterpreted as the probability of the null hypothesis instead of the
probability of observing a sample outcome at least as extreme as the one
observed. This error is easy to make since the notion of a p-value is not
intuitive. If data are collected, one is interested in the degree of
evidence that is contained in the observed data in support of the null
hypothesis. Why should one be concerned with the probability of sample
outcomes that are more extreme than the one observed?
==========
Most decent stat packages allow one to show via simulation, or better
yet, to let the students see by running the simulations themselves,
what really happens and what these outcomes mean.
In Minitab, for example, it is easy to specify some population with
known parameters, take, say, 1000 samples, and build CIs around the
sample means, and then actually show that approximately 95% of the
intervals capture mu; not all of them do. It is therefore easy to show
that any particular CI (which might have been theirs, built from their
own data) may or may not contain mu; that is, there is visual evidence
of the real concept.
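For anyone without Minitab at hand, here is a minimal sketch of the
same coverage exercise in Python with NumPy. The particular population
(normal with mu = 50, sigma = 10), the sample size of 25, and the seed
are made-up numbers for illustration only.

# Coverage simulation: draw many samples from a known normal population,
# build a 95% CI around each sample mean, and count how many capture mu.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 50.0, 10.0        # assumed (made-up) population parameters
n, reps = 25, 1000            # sample size and number of simulated samples
z = 1.96                      # 95% critical value, treating sigma as known

samples = rng.normal(mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
half_width = z * sigma / np.sqrt(n)
covered = np.mean((means - half_width <= mu) & (mu <= means + half_width))

print(f"{covered:.1%} of the {reps} intervals captured mu")  # roughly 95%, not 100%

Any single one of those 1000 intervals either contains mu or it does
not; the 95% belongs to the procedure, which is exactly the point
students need to see.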
So I would agree that before software made it fairly easy to do
simulations, it was harder to get students to grasp some of these
ideas, but I don't think that is as much the case today, if simulations
are done, and done well.
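The same kind of simulation gets at the p-value point in the quoted
passage. A sketch, again with made-up numbers (a one-sided test of
H0: mu = 50 with sigma = 10 treated as known, n = 25, and an observed
sample mean of 53.5): generate many samples under the null and count
the fraction whose sample mean is at least as extreme as the one
observed.

import math
import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n = 50.0, 10.0, 25     # assumed null value and known sigma
observed_mean = 53.5               # assumed observed sample mean

# Sampling distribution of the sample mean when H0 is true
null_means = rng.normal(mu0, sigma, size=(100_000, n)).mean(axis=1)

# Simulated one-sided p-value: fraction of null samples at least as extreme
p_sim = np.mean(null_means >= observed_mean)

# Normal-theory p-value for comparison: P(Z >= z)
z = (observed_mean - mu0) / (sigma / math.sqrt(n))
p_exact = 0.5 * math.erfc(z / math.sqrt(2))

print(f"simulated p-value {p_sim:.4f}, normal-theory p-value {p_exact:.4f}")

Students can see directly that the p-value is a statement about samples
generated when the null hypothesis is true, not the probability that
the null hypothesis itself is true.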
However, even with good simulations, and even with other (Bayesian)
approaches, some students won't get it no matter what you do, because
the assumption that if we just use the right way to explain it to them,
the students will get it, is just not true.