In article <[EMAIL PROTECTED]>,
Robert J. MacG. Dawson <[EMAIL PROTECTED]> wrote:
>[EMAIL PROTECTED] wrote (in part):
>> I'm saying that the entire concept of practical significance is not only
>> subjective, but limited to the extent of current knowledge. You may
>> regard a 0.01% effect at this point in time as a trivial and (virtually)
>> artifactual byproduct of hypothesis testing. But if proper controls are
>> in place, then to do so is tantamount to ignoring an effect that, on the
>> balance of probabilities, shouldn't be there if all things were equal. I
>> think we need to be cautious in ascribing effects as having little
>> practical significance and hence using this as an argument against
>> hypothesis testing.
> "Practical significance" is relevant if and only if there is some
>"practice" involved - that is to say, if a real-world decision is going
>to be based on the data. Such a decision _must_ be based on current
>knowledge, for want of any other; but if the data are preserved, a
>different decision can be based on them in the future if more is known
>then.
> (BTW: If a decision *is* to be made, a risk/benefit approach would seem
>more appropriate. Yes, it probably involves subjective decisions; but
>using fixed-level hypothesis testing to avoid that is a little like
>saying "as I might not choose exactly the right size of screwdriver I
>shall hit the screw with a hammer". If we do take the risks and
>benefits into account in "choosing a p-value", we are not really doing a
>classical hypothesis test, even though the calculations may coincide.)
> However, if a real-world decision is *not* going to be made, there is
>usually no need to fit the interpretation of marginal data into the
>Procrustean bed of dichotomous interpretation (which is the
>_raison d'etre_ of the hypothesis test). Until there is overwhelming
>data one way or the other, our knowledge of the situation is in shades
>of gray, and representing it in black and white is a loss of
>information.
This does not seem to be the way anything is presented in
the scientific literature. From the standpoint of collecting
information, p-values are of little, if any, value: they
contribute little to computing, or even approximating,
the likelihood function, which contains the information in the data.
The use of p-values is a carryover from the mistaken "alchemy"
period of statistics, and they have always been misinterpreted,
even by the good statisticians. They tried for answers before the
appropriate questions had been asked, and until recently
scientists believed that their models could be exactly right.
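The contrast between a single p-value and the full likelihood function
can be sketched numerically. The following is a minimal illustration
(not from the post, and the data are hypothetical): for binomial data,
the likelihood L(p) can be tabulated over the whole parameter range and
combined with other evidence later, whereas a p-value collapses the
data to one number.

```python
from math import comb

# Hypothetical data: 58 successes in 100 trials.
n, k = 100, 58

def likelihood(p):
    """Binomial likelihood of success probability p given the data (n, k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Tabulate the likelihood on a grid of parameter values; normalizing by
# its maximum gives the relative support the data lend to each p.
grid = [i / 100 for i in range(1, 100)]
lmax = max(likelihood(p) for p in grid)
support = {p: likelihood(p) / lmax for p in grid}

# The best-supported value on the grid is the maximum-likelihood
# estimate k/n; values far from it have negligible relative support.
best = max(support, key=support.get)
print(best)  # prints 0.58
```

The table `support` preserves the evidence at every parameter value,
which is the sense in which the likelihood "contains the information
in the data" while a reported p-value does not.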
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
[EMAIL PROTECTED] Phone: (765)494-6054 FAX: (765)494-0558
=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
http://jse.stat.ncsu.edu/
=================================================================