DAP:
I suspect there are many different aspects to this question. Here is
part of it. Often something will be statistically significant (p<.05), but
its practical effect is small, e.g. first-born children have higher IQs than
second-borns (p<.05), but the practical consequence is small because the
difference is only a few IQ points. For this reason a lot of people now report
significance levels as well as effect sizes (r). If we report p<.001 along
with r = .3, that's a significant effect as well as a big effect. But
p<.001 along with r = .03 would be a small, but significant, effect.
But there is more to the story because sometimes even small effect
sizes (r) can have great practical significance - depending on the
research context. Here is data from the _New England Journal of Medicine_,
1988, 318, 262-264 ("Preliminary Report: Findings from the Aspirin Component
of the Ongoing Physicians' Health Study," by the Physicians' Health Study
Research Group).
                 No Heart     Heart
                 Attack       Attack      Total
    Aspirin        10,933        104     11,037
    Placebo        10,845        189     11,034
    -------------------------------------------
    Total          21,778        293     22,071
A chi-square test shows a significant effect of aspirin in lowering the chance
of heart attack (p = .0000006), but the effect size is r = .034, considered
very small by conventional standards in psychology.
But, the practical significance of the aspirin study is large. The way
epidemiologists report findings is to figure "risk". In the placebo
condition, the risk of heart attack is (189/11034)*100 = 1.7%. In the
aspirin condition, it is (104/11037)*100 = .9%. The _relative risk_ is the
ratio of the two; 1.7% is nearly twice .9%. That is, heart attacks are nearly
cut in half - large practical significance, but a small effect size.
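The same kind of sketch, again just my own illustration of the arithmetic,
for the risk and relative-risk figures:

    # Risk and relative risk from the same table.
    aspirin_attacks, aspirin_total = 104, 11037
    placebo_attacks, placebo_total = 189, 11034

    risk_aspirin = aspirin_attacks / aspirin_total   # about 0.009, i.e. 0.9%
    risk_placebo = placebo_attacks / placebo_total   # about 0.017, i.e. 1.7%

    # Ratio of the two risks: about 1.8, so the placebo group has nearly
    # twice the risk of heart attack.
    relative_risk = risk_placebo / risk_aspirin

    print(f"placebo risk = {risk_placebo:.1%}, aspirin risk = {risk_aspirin:.1%}, "
          f"relative risk = {relative_risk:.2f}")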
I am familiar with this aspirin data through Rosnow and Rosenthal's
undergraduate text _Beginning Behavioral Research_, third edition, 1999
(they advocate binomial effect-size displays, or BESD, in these cases;
I don't like them, so I did not discuss them here). Information on risk
and relative risk is usually found in medical or epidemiological sources.
The risk approach is better for capturing practical significance when base
rates (here, the risk of heart attack) are low.
As far as I know, you are correct about statistical significance. If you
don't have statistical significance in the first place, you _usually_
can't discuss effect sizes or practical significance (I'm covering my
bases with the "usually"!).
Say, isn't it time we revived our discussion about how awkward the term
"significance" is for p statements? For the nth time, wouldn't
_reliability_ be the better word? If p < .05, we conclude the results of
the study would repeat if the experiment were replicated. That, in my book,
is the definition of reliability. This way we can dispense, once and for
all, with adjectives before the word "significance."
"DAP Louw (Sielkunde)" wrote:
> Tipsters
>
> Can someone please refer me to a source that will shed some light
> on what precisely the "difference" between clinical and statistical
> significance is. One often hears somebody saying: "Yes, there is
> no statistical significance, but there is definitely a clinical
> significance." To me statistical insignificance simply means that
> any difference should be attributed to chance factors --- and therefore
> is not significant in any other meaningful way. But since I've heard
> psychologists in various countries using "clinical significance" I'm
> starting to worry that I'm missing something here.
--
---------------------------------------------------------------
John W. Kulig [EMAIL PROTECTED]
Department of Psychology http://oz.plymouth.edu/~kulig
Plymouth State College tel: (603) 535-2468
Plymouth NH USA 03264 fax: (603) 535-2412
---------------------------------------------------------------
"What a man often sees he does not wonder at, although he knows
not why it happens; if something occurs which he has not seen before,
he thinks it is a marvel" - Cicero.