Sides?  Tails?

There are hypotheses that are one- or two-sided.
There are distributions (like the t) that are sometimes
folded over, in order to report "two tails" worth of p-value
for a given amount of extremeness.
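That "folding" can be made concrete with a quick sketch (Python with
scipy; the t statistic and degrees of freedom here are made up, just
for illustration):

```python
from scipy.stats import t

tstat, df = 2.1, 20  # hypothetical observed t statistic and d.f.

# One tail's worth: the area beyond |tstat| in a single tail.
p_one = t.sf(abs(tstat), df)

# "Folding over": the two-tailed p doubles that area, counting
# both extremes of the symmetric distribution.
p_two = 2 * p_one

print(p_one, p_two)
```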

I don't like to write about these, because it is so easy
to be careless and write it wrong -- there is no official
terminology.

On Thu, 15 Mar 2001 14:29:04 GMT, Jerry Dallal
<[EMAIL PROTECTED]> wrote:

> We don't really disagree.  Any apparent disagreement is probably due
> to the abbreviated kind of discussion that takes place in Usenet.
> See http://www.tufts.edu/~gdallal/onesided.htm
> 
> Alan McLean ([EMAIL PROTECTED]) wrote:
> 
> > My point however is still true - that the person who receives
> > the control treatment is presumably getting an inferior treatment. You
> > certainly don't test a new treatment if you think it is worse than
> > nothing, or worse than current treatments!
> 
> Equipoise demands the investigator be uncertain of the direction.
> The problem with one-tailed tests is that they imply the irrelevance
> of differences in a particular direction.  I've yet to meet the
> researcher who is willing to say they are irrelevant regardless of
> what they might be.
 [ ... ]

"Equipoise"?  I'm not familiar with that as a principle, though I
would guess....

When I was taught testing, I was taught that using *one* tail
of a distribution is what is statistically intelligible, or natural.
Adding together the opposite extremes of the CDF, as with a
"two-tailed t-test," is an arbitrary act.  It seems to be justified
or explained by pointing to the relation between the two-sample
t-test and the one-way F-test, t^2 = F.  Is that explanation enough?
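That t^2 = F identity is easy to check numerically (Python with scipy;
the two samples below are simulated, purely for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 12)  # two made-up samples
b = rng.normal(0.5, 1.0, 15)

# Pooled-variance two-sample t-test, and a one-way ANOVA
# on the same two groups.
t_stat, _ = stats.ttest_ind(a, b)
f_stat, _ = stats.f_oneway(a, b)

print(t_stat**2, f_stat)  # identical up to rounding
```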

Technically speaking (as I was taught, and as it still
seems to me), there is nothing wrong with electing to
take 4.5% from one tail, and 0.5% from the other tail.
Someone has complained about this:  that is "really"
what some experimenters do.  They say they plan a
one-tailed t-test of a one-sided hypothesis.  However,
they do not *dismiss* a big effect in the wrong direction;
rather, they want to judge it by a different critical value.
I say, this does make sense, if you set up the tests as I
just described.

That is:  I ask, What is believable?  
Yes, to a 4.4% test (for instance) in the expected direction.  
No, to a test of 2% or 1% or so, in the other  direction;
  - but:  Pay attention, if it is EXTREME enough.

Notice, you can carve out a 0.1% test and leave the main
test at 4.9%, which is not effectively different from 5%.
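A sketch of that asymmetric split (Python with scipy; the degrees of
freedom are hypothetical, and the 4.9%/0.1% split is the one discussed
above):

```python
from scipy.stats import t

df = 30                                 # hypothetical degrees of freedom
alpha_main, alpha_other = 0.049, 0.001  # asymmetric split of 5% total

# Reject if t >= upper (the expected direction) ...
upper = t.ppf(1 - alpha_main, df)
# ... or if t <= lower (an EXTREME reversal).
lower = t.ppf(alpha_other, df)

# The total type I error is still alpha_main + alpha_other = 0.05.
print(upper, lower)
```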


-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
