In article <[EMAIL PROTECTED]>,
        RD <[EMAIL PROTECTED]> writes:
>On 13 Mar 2001 07:12:33 -0800, [EMAIL PROTECTED] (dennis roberts) wrote:
>
>>1. some test statistics are naturally (the way they work anyway) ONE sided 
>>with respect to retain/reject decisions
>>
>>example: chi square test for independence ... we reject ONLY when chi 
>>square is LARGER than some CV ... to put a CV at the lower end of the 
>>relevant chi square distribution makes no sense
>>
>Hmm... I do not want to start a flame war, but I just cannot let such a HUGE
>misconception about the chi-squared test go by. Indeed, exactly the reverse
>is true: the chi-squared test is always two-tailed. There is nothing to
>prove; just look at the definition: Chi^2(n) = sum(Z^2).

Please amplify what you mean by "just look at the definition": 
If you mean that positive and negative residuals (Obs_i - Exp_i)
both increase the "lack-of-fit", then that is universally recognised.

For a chi-squared test, "two-tailed" means that you are interested 
not only in lack-of-fit [some (Obs_i - Exp_i) is too far from zero], 
but also in too good a fit [(Obs_i - Exp_i) is almost invariably too close to zero].  
The lack-of-fit is usually of most interest, given that scientists are 
allegedly honest and fairly objective, albeit optimistic about their own 
pet theory/treatment.  The suspiciously-good-fit looks for scientific fraud
(e.g. Mendel, or Mendel's assistants, produced unreasonably good fits to
his theories on wrinkly green peas etc.; Cyril Burt's "data sets" produced 
unreasonably good fits to his theories on IQ and genetic inheritance).
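Both tails can be sketched in a few lines of Python.  The counts below are
invented for illustration, and a three-cell table (df = 2) is chosen because
the chi-squared survival function then has the simple closed form exp(-x/2):

```python
import math

def chi2_sf_df2(x):
    # P(X^2 >= x) for a chi-squared variable with 2 df;
    # this has the closed form exp(-x/2) (df = 2 only).
    return math.exp(-x / 2.0)

def pearson_chi2(obs, exp):
    # Pearson's goodness-of-fit statistic: sum of (O - E)^2 / E.
    return sum((o - e) ** 2 / e for o, e in zip(obs, exp))

# Made-up counts for a 1:2:1 genetic ratio, n = 400 (3 cells, df = 2).
expected = [100.0, 200.0, 100.0]

# Usual lack-of-fit check: upper tail only.
obs_bad = [90, 185, 125]
x2 = pearson_chi2(obs_bad, expected)
p_upper = chi2_sf_df2(x2)            # ~0.015: reject the 1:2:1 ratio

# The "Mendel" check for a suspiciously good fit: lower tail.
obs_good = [100, 199, 101]
x2g = pearson_chi2(obs_good, expected)
p_lower = 1.0 - chi2_sf_df2(x2g)     # ~0.0075: fit is too good to believe

print(x2, p_upper, x2g, p_lower)
```

Note that both sets of residuals enter the statistic squared; the two "tails"
are tails of the chi-squared distribution itself, not of the residuals.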

Usually in the "one- vs two-tailed debate", people are talking about
t-tests or similar, where deviations from the null hypothesis in two 
opposing directions (new treatment best / standard treatment best)
are both of interest.  This is totally different from traditional 
chi-squared or similar tests.
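For a location test the two tails correspond to opposite substantive
conclusions.  A minimal sketch with a z statistic (invented summary numbers;
a t statistic behaves similarly once the degrees of freedom are large):

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical mean difference d (new minus standard) and standard error se.
d, se = 9.0, 5.0
z = d / se                                # 1.8

p_one = 1.0 - phi(z)                      # H1: new treatment better, ~0.036
p_two = 2.0 * (1.0 - phi(abs(z)))         # H1: treatments differ, ~0.072

print(round(p_one, 4), round(p_two, 4))
```

The same data are "significant at 5%" one-tailed but not two-tailed, which
is exactly where the debate bites.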

>Together with many other answers I have seen on sci.stat.*, this makes my
>desire to unsubscribe grow.

Why?

>Now getting back to the original question. If you declared that you would
>carry out a one-tailed test and it was not significant, your conclusion is
>simply as follows: "We could not show that reaction time in condition A is
>longer than in condition B." (Full stop.) That is your main conclusion.

An alternative main conclusion is: "What a total klutz I was to apply
a ridiculous 1-tailed test when I could have applied a slightly less
ridiculous 2-tailed test."

>Now, on the side, you can play around to try to explain this (as in your
>case, where it appears that the reason lay in a small subset), and conclude
>that to show this you are going to start another study.
>Finally, on the subject of your message. My answer is: ALWAYS DO TWO-
>TAILED TESTS. In a nutshell, there are two major reasons to do two-
>tailed tests. First, your problem is a good example - you tested whether A
>was superior to B instead of testing for a difference, and you failed.
>Second, imagine your test reached the 5% barrier. In that case you will
>probably give the reader the mean difference with its confidence interval.
>This CI is 95% and may contain 0. Seems weird, doesn't it?
>Incidentally, my opinion agrees with the international harmonisation
>guidelines. Just dig through the FDA site to find them. There is a half-page
>of additional explanation of why one-tailed tests at 5% are unacceptable.
>The result: you cannot submit a drug for approval based on studies
>with one-tailed 5% tests.
>
>I am a dermatologist, not a statistician, and all these questions seem
>obvious to me. I am disappointed.
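The CI point above (one-tailed p just under 5% while the conventional
two-sided 95% interval straddles 0) is easy to exhibit with invented numbers:

```python
import math

def phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical mean difference and standard error.
d, se = 8.5, 5.0
z = d / se                                # 1.7

p_one = 1.0 - phi(z)                      # ~0.045: "significant" one-tailed
lo, hi = d - 1.96 * se, d + 1.96 * se     # conventional 95% two-sided CI

print(round(p_one, 4), (round(lo, 1), round(hi, 1)))
# one-tailed p < 0.05, yet the reported 95% CI contains 0
```

The mismatch arises because the one-tailed 5% cutoff is z = 1.645 while the
two-sided 95% interval uses 1.96.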

For me, the only important practical (as opposed to theoretical)
objection to carrying out a 1-tailed test is ethical.  If an amateur
statistician decides that applying 10mg Cu per square metre is no
better for wheat yield than applying 10mg K per square metre,
then deciding to apply 10mg Cu/m^2 is their prerogative, their problem, 
and an example of evolution in action.  However, if they chose to
apply poison to my grandmother because it is no better than medically-
accepted standard treatment for multiple sclerosis, then I would object.
Forcibly.  See "Decision Theory".

More importantly, I would say: DON'T DO TESTS.  Instead, try to find
models that you would be prepared to use to predict the response
in as-yet untried circumstances.
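As a toy illustration of the model-then-predict attitude (invented
dose/response numbers; ordinary least squares fitted by hand):

```python
# Hypothetical doses and responses.
xs = [0.0, 5.0, 10.0, 15.0]
ys = [2.1, 3.0, 3.8, 4.9]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Least-squares slope and intercept.
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

# Predict the response at an as-yet untried dose.
pred = a + b * 12.5
print(pred)
```

The fitted line, not a reject/retain verdict, is the thing you would actually
stake a prediction on.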
-- 
J.E.H.Shaw   [Ewart Shaw]        [EMAIL PROTECTED]     TEL: +44 2476 523069
  Department of Statistics,  University of Warwick,  Coventry CV4 7AL,  U.K.
  http://www.warwick.ac.uk/statsdept/Staff/JEHS/
yacc - the piece of code that understandeth all parsing


=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
                  http://jse.stat.ncsu.edu/
=================================================================
