As always, Jim makes good points.  Don't waste your time searching for 
an example where a (1 - alpha) confidence interval includes zero effect but 
the hypothesis test is significant at alpha, or where the confidence interval 
does not include zero but the test is not significant.  As Jim well knows, 
you will not find such a case, and that, of course, is his point.
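That duality is easy to verify numerically. Here is a minimal sketch in plain Python (using a z-test rather than a t-test for simplicity; the helper name z_test_and_ci is mine, not from any library). The same critical value feeds both the p-value cutoff and the interval, which is exactly why the two procedures can never disagree.

```python
import math

def z_test_and_ci(estimate, se, crit=1.959963984540054):
    """Two-sided z-test of H0: effect = 0, plus the matching 95% CI.

    `crit` is the two-sided critical z for alpha = .05; because the
    test and the interval share it, "CI excludes zero" and "p < .05"
    are the same event.
    """
    z = estimate / se
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided p-value:
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p, (estimate - crit * se, estimate + crit * se)

for est in (0.10, 0.25, 0.50):
    p, (lo, hi) = z_test_and_ci(est, se=0.2)
    print(f"estimate={est}: p={p:.4f}, 95% CI=({lo:.3f}, {hi:.3f})")
    # The interval excludes zero exactly when the test is significant.
    assert (p < 0.05) == (not (lo <= 0 <= hi))
```

Run it with any estimate and standard error you like; the assertion never fires.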

     The CI approach may or may not be easier to teach, and may or may not 
be less likely to be misinterpreted.  Jim may well be correct that were we 
to use confidence intervals instead of hypothesis tests the typical users 
would find ways to screw up the interpretation of CIs every bit as much as 
they now do with hypothesis tests.  Sigh.

  My point is that the CI gives you everything you get with the hypothesis 
test and more.

    Consider the following two possible outcomes:

A.  p = .051, and a 95% CI for d runs from -0.135 to 8.5415.

B.  p = .051, and a 95% CI for d runs from -0.0002 to 0.0782.

   These two possible outcomes paint very different pictures for me.  When I 
see outcome A, I think that the effect could be small in one direction or 
enormous in the other direction, and I am motivated to do what it takes to 
narrow the confidence interval (get more data, probably).  When I see 
outcome B, I am pretty well convinced that the effect is trivial in 
magnitude, regardless of whether it is in this or that direction, and will 
probably choose to treat it as nil (and I prefer Plavix to aspirin).
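The "get more data" remedy for an interval like A can also be sketched. The snippet below (plain Python; the helper name approx_ci_for_d is mine) uses the standard large-sample approximation to the sampling variance of Cohen's d, var(d) ~ (n1 + n2)/(n1*n2) + d^2/(2*(n1 + n2)), and shows the interval narrowing roughly as 1/sqrt(n):

```python
import math

def approx_ci_for_d(d, n1, n2, crit=1.959963984540054):
    """Approximate 95% CI for Cohen's d from two group sizes,
    using the large-sample normal approximation to d's variance."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - crit * se, d + crit * se

# Width shrinks roughly as 1/sqrt(n): more data is what turns an
# interval like outcome A into something interpretable.
for n in (10, 100, 1000):
    lo, hi = approx_ci_for_d(0.2, n, n)
    print(f"n per group = {n:4d}: 95% CI = ({lo:+.3f}, {hi:+.3f}), "
          f"width = {hi - lo:.3f}")
```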

Cheers,

Karl W.
----- Original Message ----- 
From: "Jim Clark" <[EMAIL PROTECTED]>


Jim replies:

But we no longer report simply whether an effect is significant or not;
rather, we report the p value of the effect.  I suspect that knowing
that p = .051 would be no more likely to be misinterpreted than the
confidence interval.

>>> [EMAIL PROTECTED] 24-Jun-05 2:12:28 PM >>>


