In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (dennis roberts) wrote:
>
> thus, the idea is that 5% and/or 1% were "chosen" due to the tables that
> were available and not, some logical reasoning for these values?
>
> i don't see any logic to the notion that 5% and/or 1% ... have any special
> nor simplification properties compared to say ... 9% or 3%
>
> given that it appears that these same values apply today ... that is, we
> have been in a "stuck" mode for all these years ... is not very comforting
> given that 5% and/or 1% were opted for because someone had worked out these
> columns in a table
I agree, and I suspect that although the original work settled on the 5%
and 1% levels for practical reasons, the tradition persists because it
gives journal editors a convenient criterion for deciding between
'important' and 'unimportant' findings. Consequently, to increase the
chances of being published, researchers sometimes resort to terms like
"highly significant" when referring to low p-values, which is really a
rather nebulous statement (if not completely misleading; I shall leave
that determination to the experts). To me, it seems that less emphasis
on p-values per se, and more emphasis on power and effect size, would
improve the general quality and replicability of published findings.
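To make that last point concrete, here is a minimal sketch in plain Python
(hypothetical numbers; a normal approximation to the two-sample test, which
is reasonable at this sample size). With a large enough n, a practically
negligible difference between groups still yields a "highly significant"
p-value, while the effect size (Cohen's d) correctly stays tiny:

```python
import math
import random

random.seed(0)

# Two groups whose true means differ by a trivial 0.02 standard deviations
# (an assumed, illustrative effect) -- but sampled very heavily.
n = 200_000
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.02, 1.0) for _ in range(n)]

mean_a = sum(a) / n
mean_b = sum(b) / n
var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)

# Two-sample z statistic (normal approximation; shrinks as n grows)
se = math.sqrt(var_a / n + var_b / n)
z = (mean_b - mean_a) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Cohen's d: standardized effect size, which does NOT inflate with n
pooled_sd = math.sqrt((var_a + var_b) / 2)
d = (mean_b - mean_a) / pooled_sd

print(f"p = {p:.2e}")  # minuscule: "highly significant" by any threshold
print(f"d = {d:.3f}")  # also minuscule: practically negligible effect
```

The "highly significant" p-value here says nothing about whether the
difference matters; the effect size does, which is why reporting both (plus
the power of the design) tells readers far more than a p-value alone.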
Chris
=================================================================
Instructions for joining and leaving this list and remarks about
the problem of INAPPROPRIATE MESSAGES are available at
http://jse.stat.ncsu.edu/
=================================================================