In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Robert J. MacG. Dawson) wrote:
>
>
> > Wrt your example, it seems that the decision you are making about
> > practical importance is purely subjective.
>
> What exactly do you mean by this? Are you saying that _my_
> example is purely subjective but that others are not, or that the
> entire concept of practical significance is subjective? And, if so, so
> what? Does it then follow that it is more "scientific" to ignore it
> entirely?
I'm saying that the entire concept of practical significance is not only
subjective, but also limited by the extent of current knowledge. You may
regard a 0.01% effect at this point in time as a trivial and (virtually)
artifactual byproduct of hypothesis testing. But if proper controls are
in place, then to do so is tantamount to ignoring an effect that, on the
balance of probabilities, shouldn't be there if all else were equal. I
think we need to be cautious in dismissing effects as having little
practical significance and then using this as an argument against
hypothesis testing.
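To make the point concrete, here is a toy simulation (Python with
numpy/scipy; the effect size, scale, and sample size are invented
purely for illustration). With a large enough sample, even a 0.01%
shift in the mean produces an unambiguous test statistic:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 1_000_000                          # a very large sample per group
    control = rng.normal(100.00, 1.0, n)
    treated = rng.normal(100.01, 1.0, n)   # true effect: 0.01% of the mean

    t, p = stats.ttest_ind(treated, control)
    print(f"t = {t:.1f}, p = {p:.1e}")     # t around 7, p far below 0.05

Whether that 0.01 matters is a substantive question, not a statistical
one; the test only tells us the effect is unlikely to be noise.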
> Fair enough: but I would argue that the right question is rarely "if
> there were no effect whatsoever, and the following model applied, what
> is the probability that we would observe a value of the following
> statistic at least as great as what was observed?" and hence that a
> hypothesis test is rarely the right way to obtain the right answer.
> Hypothesis testing does what it sets out to do perfectly well; the
> only question, in most cases, is why one would want that done.
I agree with this. Judging from your rephrasing of the research
question, there seems to be no reason why most research questions could
not be phrased in this manner. Rather, it seems that the problems with
hypothesis testing result from people misusing it. As I said before, I
don't think this can be seen as a problem with hypothesis testing; it is
a matter for hypothesis *testers*.
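Your rephrasing can even be written down directly. As a sketch (Python;
the null model, sample size, and observed statistic here are all
hypothetical), one can simulate the null model and simply count how
often it produces a statistic at least as large as the one observed:

    import numpy as np

    rng = np.random.default_rng(1)
    observed_mean = 0.12        # hypothetical observed statistic
    n, sigma = 50, 1.0          # assumed null model: n draws from N(0, 1)

    # distribution of the sample mean if there were no effect whatsoever
    null_means = rng.normal(0.0, sigma, size=(100_000, n)).mean(axis=1)
    p = np.mean(null_means >= observed_mean)
    print(f"P(statistic >= observed | null model) = {p:.3f}")

That is all the machinery answers; whether it was the right question to
ask is up to the tester.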
> Fair enough... I do not argue with your support of proper controls.
> However, in the real world, insisting on this would be tantamount to
> ending experimental research in the social sciences and many
> disciplines within the life sciences. (You may draw your own
> conclusions as to the advisability of this <grin>
Certainly, one could argue that anyone who wants to test a hypothesis
needs to adhere to the same guidelines. The fact that this frequently
doesn't happen is, again, the fault of people, not principles. One quick
glance at the social psychology literature, for example, reveals a
history replete with low power, inadequate controls and spurious
conclusions based on doubtful stats. (I'm going to annoy somebody here,
I just know it <grin>.)
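To put a rough number on the low-power complaint, here is a
back-of-the-envelope calculation (Python with scipy; Cohen's d = 0.2
and n = 30 per group are textbook-style assumptions, not figures from
any particular study), using the usual normal approximation for a
two-sided two-sample test:

    from scipy.stats import norm

    d, n, alpha = 0.2, 30, 0.05            # assumed small effect, typical n
    z_crit = norm.ppf(1 - alpha / 2)
    power = norm.cdf(d * (n / 2) ** 0.5 - z_crit)
    print(f"approximate power: {power:.2f}")   # roughly 0.12

Studies run at that power will miss a real effect almost nine times out
of ten, which is exactly the sort of practice that gets blamed on
hypothesis testing itself.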
> - I will venture an opinion that it ain't a-gonna happen, advisable or
> no.) There are always more experimental variables than we can control
> for, and there are often explanatory variables of interest that it
> would be impossible (e.g., ethnic background - unless we can emulate the
> aliens on the Monty Python episode who could turn people into
> Scotsmen!) or unethical to randomize. The best that one can hope to
> do in such situations is control for the nuisance variables whose
> effects are judged likely to be large, and accept that any small
> effect is of unknowable origin.
I fully agree, although I would amend unknowable origin to _presently_
unknowable origin. And I think this really hits the core of the issue:
small effects, no matter where they come from, often turn out to be big
effects (or to disappear entirely) when greater knowledge allows us to
refine the control conditions. I think that is a valuable asset of
hypothesis testing. It demands stringent adherence by its users, but it
rewards vigilance.
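When a nuisance variable can't be randomized (your ethnic-background
example), one common fallback is to adjust for it in the model rather
than control it in the design. A minimal sketch (Python with numpy; the
data, effect sizes, and variable names are all invented) of how a
confounded naive comparison is corrected by including the nuisance
variable as a covariate:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 5_000
    nuisance = rng.integers(0, 2, n).astype(float)  # unrandomizable background
    # exposure uptake depends on the nuisance variable (confounding)
    treatment = (rng.random(n) < 0.3 + 0.4 * nuisance).astype(float)
    y = 2.0 * nuisance + 0.5 * treatment + rng.normal(0.0, 1.0, n)

    naive = y[treatment == 1].mean() - y[treatment == 0].mean()
    X = np.column_stack([np.ones(n), treatment, nuisance])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"naive difference:  {naive:.2f}")     # biased well above 0.5
    print(f"adjusted estimate: {coef[1]:.2f}")   # close to the true 0.5

Of course, this only adjusts for the nuisance variables we know to
measure, which is your point about small residual effects being of
(presently) unknowable origin.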
Chris