Ron's post never showed up on my server.
I especially agreed with the first paragraph of Steve's answer.
No one so far has posted a response that recognizes the total
innocence of the original question --
This is not, "Why do we see two things that are almost identical?"
This is, "Why do we see two things 'that don't mean anything to me'?"
If you just want one number, why not "the p-level"?
With the chi-squared, you need to have the DF before you can look up
the p-level. The chi-squared is best used as a "test statistic," but
it is not complete all by itself.
The Odds Ratio is not even a "test statistic," but rather an effect
size. Having the OR is like having a mean difference, where you also
need the N and the standard deviation before you have a test. With
the OR, you have to have the N, plus the marginal Ns, in order to get
a test.
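To make that concrete, here is a rough sketch in Python with scipy --
my own invented 2x2 tables, not SPSS output or anything from Ron's
data. Two tables with the same OR of 4 give very different tests,
because the test also needs the N and the margins; and the chi-squared
only turns into a p-level once you know its df.

    # Sketch only; assumes scipy is available.  Tables are invented.
    from scipy import stats

    def summarize(table):
        (a, b), (c, d) = table
        odds_ratio = (a * d) / (b * c)   # effect size: no p-level by itself
        chi2, p, dof, _expected = stats.chi2_contingency(table)
        return odds_ratio, chi2, dof, p

    small = [[10, 5], [5, 10]]           # OR = 4, total N = 30
    large = [[100, 50], [50, 100]]       # OR = 4, total N = 300

    for name, tab in (("small", small), ("large", large)):
        orat, chi2, dof, p = summarize(tab)
        print(f"{name}: OR={orat:.1f}  chi2(df={dof})={chi2:.2f}  p={p:.4f}")

Same effect size in both tables, but only the larger one comes out
"significant."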
On 17 Jul 2000 10:54:15 -0700, [EMAIL PROTECTED] (Simon, Steve, PhD)
wrote:
> Ron Bloom writes:
>
> >Why do canned software packages
> >quote so many different statistics whose
> >intrinsic tendencies towards "significance"
> >or non-significance are obviously correlated
> >with each other. Is it because folklore
> >somehow plays a large part in what the
> >"right test is" ?
>
> This is a general trait for most software, not just statistical software.
> The vendors want to attract the largest number of customers possible, so
> they throw in everything and the kitchen sink. Look at all the features in
> your word processor. Do you even use 10% of them? But everybody uses a
> different 10%, so cutting back on any specific feature will get some of the
> customer base upset.
>
< snip, rest >
When SPSS gives you contingency chi-squared, it has available, on
request, a number of statistics that are not necessarily useful or
correct or appropriate for your question, but usually, the odd ones
won't be "as significant" as the meaningful ones. Don says, sure, go
with the most significant; and sometimes that is right.
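For example (again a sketch in Python/scipy, with an invented table,
not SPSS output): the Pearson chi-squared and the likelihood-ratio
chi-squared that a package prints for the same table almost always
rise and fall together, which is exactly the correlation Ron noticed.

    from scipy import stats

    table = [[30, 10], [20, 25]]         # invented 2x2 counts

    # Pearson chi-squared and the likelihood-ratio ("G") statistic,
    # both computed from the same table.
    chi2, p_chi2, dof, _ = stats.chi2_contingency(table, correction=False)
    g, p_g, _, _ = stats.chi2_contingency(table, correction=False,
                                          lambda_="log-likelihood")

    print(f"Pearson chi2(df={dof})          = {chi2:.2f}, p = {p_chi2:.4f}")
    print(f"Likelihood-ratio chi2(df={dof}) = {g:.2f}, p = {p_g:.4f}")

Two numbers, one table, essentially one question -- so it is no
surprise when they agree.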
When SPSS gives you a t-test, it gives you two tests, which are
distinguished as to whether you assume that you can "pool" the
variances. If the two tests give different answers, the short advice is that you
ought to "be wary" -- because (1) statisticians agree that you can't
grab the "better" test by p-value; and (2) statisticians agree that
you can't select a test merely by looking at whether the VARIANCES
differ in this sampling. (See my stats-FAQ for more comment on this.)
Also, you can't report that the 't-test' is significant when the test
with the small p was the Levene test on the variances rather than the
t-test on the means. (Similarly, for the paired t-test, the test on
the 'correlation' -- usually significant -- is not the same as the
test on the 'difference'.)
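Here is the same kind of sketch for the t-test output, in Python/scipy
with invented data (SPSS labels the pieces differently, but the logic
is the same): the pooled and Welch t-tests are two tests on the MEANS;
Levene's test is a test on the VARIANCES; and in the paired case the
test on the difference is not the test on the correlation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    g1 = rng.normal(0.0, 1.0, size=30)   # group 1
    g2 = rng.normal(0.8, 2.0, size=30)   # group 2: shifted mean, larger SD

    # Two t-tests on the means: pooled variances vs. Welch (unpooled).
    t_pool, p_pool = stats.ttest_ind(g1, g2, equal_var=True)
    t_welch, p_welch = stats.ttest_ind(g1, g2, equal_var=False)

    # Levene's test is about the variances, not the means; a small p
    # here says nothing about whether the means differ.
    _w, p_lev = stats.levene(g1, g2)

    print(f"pooled t : p = {p_pool:.4f}")
    print(f"Welch t  : p = {p_welch:.4f}")
    print(f"Levene   : p = {p_lev:.4f}")

    # Paired case: the test on the mean difference is not the test
    # on the correlation, even though both are printed.
    pre = rng.normal(10.0, 2.0, size=25)
    post = pre + rng.normal(0.2, 1.0, size=25)
    _t, p_diff = stats.ttest_rel(pre, post)
    _r, p_corr = stats.pearsonr(pre, post)
    print(f"paired t on the difference : p = {p_diff:.4f}")
    print(f"test on the correlation    : p = {p_corr:.4f}")

Four p-values, four different questions; only one of them answers the
question you probably started with.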
Then, SPSS Reliability was originally programmed by someone at a
university who noticed that there were at least three different math
problems that used the same data organization; so they wrote in all
the options (still there as of version 6, anyway). The motive was
more like "doing something clever" than commercial reward -- and the
new, naive user would invoke all the options, since the manual never
made it clear that there were, indeed, different problems.
You might get a test that is significant, but you better know whether
it TELLS you anything that you are supposed to know.
--
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html