On 16 Mar 2002 08:11:26 -0800, [EMAIL PROTECTED] (wuzzy) wrote:

> The highest form of research is that which finds a significant
> treatment effect in vivo.
> In my experience, though, it seems that small effect sizes are
> almost the rule in natural environments.  Single variables often
> account for only 3%-5% or less of what is observed, probably
> because so many variables are operating, many of them
> transient, saltatory, or unmeasurable.
> 

You have been bamboozled!
Where did you get your notion of 'effect sizes'?
What *is* your notion of effect sizes?  You seem to
have internalized Cohen's rhetoric, which is suited
to his audience: social scientists designing experiments
with 20-100 subjects.

Bigger 'effects' -- There are good scientists in labs
who never do more than 3 or 5 replications (say), because
anything fuzzier is too dubious.

Much smaller 'effects' -- There are nuclear physicists who
now scan the tracks of millions of collisions in order to
compare a handful of special events.

Later, you mention r.   The Pearson correlation is useful 
for describing things with high correlations.  
It is *not*  especially useful for low correlations.  
It is especially *bad*  for describing rare occurrences.

Epidemiologists use "odds ratios".  For instance, evidence
of correlation of a cause with a disease might be given
by an odds ratio of 10, whereas the corresponding
Pearson r is, say, 0.01.  That takes a big N to be
'significant', of course.  Across a range of prevalences
you can have the same odds ratio, while the r increases
directly with the rate of disease: and that is why r is
properly disregarded by *them*.
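To make that concrete, here is a quick Python sketch -- my own
toy numbers, not anyone's data -- that holds the odds ratio at 10,
varies the disease prevalence, and computes the Pearson r
(the phi coefficient, for a 2x2 table):

    import math

    def table_stats(n11, n10, n01, n00):
        """Odds ratio and phi (Pearson r for a 2x2 table) from counts."""
        odds_ratio = (n11 * n00) / (n10 * n01)
        row1, row0 = n11 + n10, n01 + n00     # exposed / unexposed totals
        col1, col0 = n11 + n01, n10 + n00     # diseased / healthy totals
        phi = (n11 * n00 - n10 * n01) / math.sqrt(row1 * row0 * col1 * col0)
        return odds_ratio, phi

    n = 10_000                                # subjects per exposure group
    for p_unexposed in (0.0001, 0.001, 0.01):
        odds_u = p_unexposed / (1 - p_unexposed)
        odds_e = 10 * odds_u                  # hold the odds ratio at 10
        p_exposed = odds_e / (1 + odds_e)
        n11 = round(n * p_exposed)            # exposed and diseased
        n01 = round(n * p_unexposed)          # unexposed and diseased
        or_, phi = table_stats(n11, n - n11, n01, n - n01)
        print(f"prevalence ~{p_unexposed:.2%}: OR = {or_:.1f}, r = {phi:.4f}")

The odds ratio sits at 10 in every scenario, while r climbs with
the rate of disease.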

"Small effects"  in that traditional sense of r  and
percent of variance  is why a number of epidemiological
studies decide to enrol ten thousand or more subjects
and follow them for years.
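A back-of-the-envelope calculation -- mine, using the Fisher z
approximation, atanh(r) * sqrt(N - 3) > 1.96 -- shows the N
needed before a small r even clears two-sided p < .05:

    import math

    def n_for_significance(r, z_crit=1.96):
        """Smallest N at which |r| reaches two-sided p < .05 (Fisher z)."""
        return math.ceil((z_crit / math.atanh(abs(r))) ** 2 + 3)

    for r in (0.30, 0.10, 0.08, 0.05, 0.03):
        print(f"r = {r:.2f}: N of about {n_for_significance(r):,}")

And that is bare significance; to *detect* such an r with 80%
power, roughly double those Ns.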


> At the same time there is a trend in stats to move away from p-values
> to just saying (as a dichotomy) "significant" or "not", and quoting a
> confidence interval.

 - actually, the trend is *away from* the simple *test* that says
significant versus not, and toward giving the exact p-value,
since that is more informative.

The CI, you may note, is an alternative presentation of the
exact p-value, one that happens to provide more detail: the
two can be computed from each other, so long as you have the
mean and SD to go along with the (suitably exact) p-value.
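Here is a sketch of that two-way street -- my own illustration,
using the normal approximation and a made-up mean, SD, and N:

    from statistics import NormalDist

    norm = NormalDist()   # standard normal

    def p_and_ci(mean, sd, n, level=0.95):
        """Exact two-sided p (H0: true mean = 0) plus CI, from summaries."""
        se = sd / n ** 0.5
        p = 2 * (1 - norm.cdf(abs(mean) / se))
        half = norm.inv_cdf(0.5 + level / 2) * se
        return p, (mean - half, mean + half)

    def ci_from_p(mean, p, level=0.95):
        """Recover the CI given the mean and the exact two-sided p."""
        se = abs(mean) / norm.inv_cdf(1 - p / 2)   # invert the z statistic
        half = norm.inv_cdf(0.5 + level / 2) * se
        return (mean - half, mean + half)

    p, ci = p_and_ci(mean=2.0, sd=10.0, n=100)
    print(f"p = {p:.4f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    print("recovered from p and mean:",
          tuple(round(x, 2) for x in ci_from_p(2.0, p)))

Given the mean, the exact p pins down the SE, and the CI follows;
the p-value alone is not enough, which is the point about needing
the summary statistics alongside it.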


> 
> Anyway, my question is one that I have asked before:  are you able to
> draw conclusions from research based on such small effect sizes?  An
> r of +0.08 that is significant.  I have found a lot of regulatory
> mechanisms operate with effect sizes that are this small: e.g., there
> may be a feedback mechanism that is well established, yet the effect
> size is only 0.08.
> 
> Any articles or books on this topic would be appreciated.

Look to the books on research methods or experimental
design in your own area?
-- 
Rich Ulrich, [EMAIL PROTECTED]
http://www.pitt.edu/~wpilib/index.html