When I started reading this thread, I thought that it was a perfect
example of the kind of situation that often touches what for
Professor Rubin is an ongoing sore spot. When I read his response, I
was surprised to see that it wasn't exactly what I was expecting, in
that he addressed the specific question rather than the general
issue.

After all these years of reading his general response, I must admit
I've become a believer and, at the moment, am dealing with how I'll
modify my own course next year. So, here's my current take on the
general response. It seems particularly appropriate because the post
was to sci.stat.EDU: Students need to understand probability and
random sampling and how they form the basis for statistical methods.
If students were thoroughly grounded in these areas, they would be
able to work through many questions such as this one on their own.

I am not being critical of the poster.  It's a good question and the
poster is doing the right thing by asking.  The issue I'm trying to
address is what we as educators might do to see that questions like
this don't arise in the first place, once students have been
introduced to statistics during their training.

The poster asked, "Given the shortcomings of convenience samples,
does one have to forego any type of meaningful analysis?  Or, can an
analysis be conducted provided an emphatic statement is included by
the researcher about the shortcomings of convenience samples?"

Others have spoken eloquently about the suitability of this sample.
I'd like to add one comment about convenience samples and research
in general, namely, "Research has consequences!" We have to be very
careful with data like these.  Once they are published, they will be
used by scientists, planners, politicians, and anyone else with an
ax to grind, for whatever goals and purposes the data might favor.
The emphatic statement about the shortcomings will be lost.

Some related comments from my webpage on study design:

"Research is usually conducted with a view toward publication and
dissemination. When results are reported, not only will they be of
interest to other researchers, but it is likely that they will be
noticed by the popular press, professionals who deal with the
general public, and legislative bodies--in short, anyone who might
use them to further his/her personal interests. 

"You must be aware of the possible consequences of your work. Public
policy may be changed. Lines of inquiry may be pursued or abandoned.
If a program evaluation is attempted without the ability to detect
the type of effect the program is likely to produce, the program
could become targeted for termination as a cost-savings measure when
the study fails to detect an effect. If, for expediency, a treatment
is evaluated in an inappropriate population, research on that
treatment may improperly come to a halt or receive undeserved
further funding when the results are reported. 

"One might seek comfort from the knowledge that the scientific
method is based on replication. Faulty results will not replicate
and they'll be found out. However, the first report in any area
often receives special attention. If its results are incorrect
because of faulty study design, many further studies will be
required before the original study is adequately refuted. If the
data are expensive to obtain, or if the original report satisfies a
particular political agenda, replication may never take place."