> You have been bamboozled!
> Where did you get your notion of 'effect sizes'?  
> What *is*  your notion of effect sizes? - you seem to 
> have internalized Cohen's rhetoric, which is suitable
> to his audience: social scientists designing experiments
> with 20-100 subjects.  

Hehe, I was about to ask about tests for small populations.  Cohen
mentions a "friend" of his who was interested in these, but I have never
come across such tests in the literature.

> 
> Bigger 'effects' -- There are good scientists in labs
> who never do more than 3 or 5 replications (say) because
> everything fuzzier is too dubious.  


As far as I have seen, published results often don't seem to be
replicable.  Is there a test for replicability?  (The p-value does not
measure replicability.)  For instance, a significant result might only
show up in 30% of studies done on similar populations.  [Ex. 1: compare
the large epidemiological studies done in the US.]
Ex. 2: meta-analysis "funnel" plots look more chaotic than the raw data.
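
To make that concrete, here is a rough simulation sketch (the effect
size, per-group n and alpha below are made-up illustration values, not
taken from any particular study): with a real but modest effect, the
chance that any one study reaches p < .05 is just the power of the
design, so a "significant" finding may only reappear in a minority of
similar studies.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.3     # assumed standardized mean difference
n_per_group = 30      # assumed per-group sample size
alpha = 0.05
n_studies = 10_000    # number of simulated replications

significant = 0
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    significant += p < alpha

# With these numbers the "replication rate" is roughly the power of the
# design (about 20%), even though the effect is perfectly real -- a single
# p < .05 says very little about how often the finding will reappear.
print(f"fraction of studies with p < {alpha}: {significant / n_studies:.2f}")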


One argument in this month's Journal of Counseling and Development
(Thompson, B. (2002). "Statistical," "practical," and "clinical": How
many kinds of significance do counselors need to consider?  Journal of
Counseling and Development, 80(1), 64-71)
is that the significance of an event is not judged by its
"significance" as measured by p.  They cite the example of an asteroid
colliding with the Earth, which has a very high p-value but would be
extremely significant!
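
The converse is also easy to show in a simulation (again, the sample
size and effect size below are invented for illustration, not from
Thompson's article): with a huge enough sample, a difference of one
hundredth of a standard deviation -- practically meaningless -- comes
out "statistically significant".

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n = 500_000            # assumed (very large) sample per group
tiny_effect = 0.01     # 1% of a standard deviation: negligible in practice

a = rng.normal(0.0, 1.0, n)
b = rng.normal(tiny_effect, 1.0, n)

_, p = stats.ttest_ind(b, a)
d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)

print(f"p-value   : {p:.3g}")   # almost always far below .05
print(f"Cohen's d : {d:.3f}")   # ~0.01 -- a trivially small effect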

Similarly, as pointed out, small R^2 values are the norm in research: I
read somewhere that an R^2 of 0.02 is weak whereas 0.25 is very high.  I
am amazed at this, as statistics books always give examples with R^2 = 0.99.
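
A quick sketch of why both pictures can be right (the noise levels
below are invented for illustration): R^2 is essentially signal
variance over signal-plus-noise variance, so the same linear
relationship gives the textbook R^2 near 0.99 when measurement noise is
tiny, and an R^2 in the 0.02-0.25 range when the noise swamps the
signal, which seems to be the usual situation with behavioural or
epidemiological outcomes.

import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)

def r_squared(x, y):
    # squared Pearson correlation = R^2 of the simple linear fit
    return np.corrcoef(x, y)[0, 1] ** 2

for noise_sd in (0.1, 2.0, 7.0):               # invented noise levels
    y = x + rng.normal(0.0, noise_sd, size=n)  # same unit slope every time
    print(f"noise sd = {noise_sd}: R^2 = {r_squared(x, y):.3f}")

# noise sd 0.1 -> R^2 around 0.99 (the textbook picture)
# noise sd 2.0 -> R^2 around 0.20
# noise sd 7.0 -> R^2 around 0.02 (weak, but apparently typical)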