On Thu, 11 Apr 2013 19:50:14 -0700, Jim Clark wrote:
>Hi
>[snip]
>Mike P correctly points out how a simulation differs from reality,
>but perhaps misses my point.
Sorry to disagree, but I don't miss the point. I suggest that you read
my review of Geoff Cumming's "Understanding the New Statistics"
in PsycCritiques, where I take him to task for using a similar argument.
Here's a link to Cumming's book on Amazon:
http://www.amazon.com/Understanding-The-New-Statistics-Meta-Analysis/dp/041587968X/ref=sr_1_1?ie=UTF8&qid=1365765004&sr=8-1&keywords=the+new+statistics
You can access PsycCritiques through your institution's library
(the review's title is "New Statistical Rituals for Old").
Consider Leo DiCara's research on operant conditioning of the autonomic
nervous system, and imagine that a meta-analysis were done of it.
After the initial positive findings, replications failed and eventually
stopped being done. But, net-net, a meta-analysis would still yield some
non-zero effect size because of (a) the early positive effects and
(b) the large overall sample size. Dworkin and Miller show the problems
with this:
Dworkin, B. R., & Miller, N. E. (1986). Failure to replicate visceral
learning in the acute curarized rat preparation. Behavioral Neuroscience,
100(3), 299.
Add to this that no one has been able to replicate the research since
Dworkin & Miller was published.
And then I'd suggest taking a look at Francis' treatment of Bem's
psi data and Schooler's verbal overshadowing research, which provides
evidence of the "decline effect":
Francis, G. (2012). Too good to be true: Publication bias in two
prominent studies from experimental psychology. Psychonomic
Bulletin & Review, 19(2), 151-156.
A copy can be obtained here:
personal.stevens.edu/~ysakamot/726/paper/Research/PublicationBias.pdf
>Imagine, for example, that we are interested in fMRI results for some
>rare condition versus the general population. I don't know what fMRI
>research costs in the US or other countries, but it can be very
>expensive in Canada. Might a researcher be able to manage 10 subjects,
>but not 90 or 250? Or if the condition is particularly rare, how long
>would it take to get 10, 90, 250, or whatever number of participants?
>For me, I would like to see a venue for multiple researchers who are
>only able to manage 10 participants because of $ and/or time
>constraints to publish their results in a way that would later allow
>these results to be aggregated with other similarly restricted
>studies. There are, however, dangers (e.g., exaggerated reports in
>media), as I noted.
You consistently avoid the following issues:
(1) Making a firm decision about what effect size a researcher thinks is
present and whether it is best viewed as a fixed effect or a random
effect.
(2) Doing an a priori power analysis in order to determine the
probability of detecting an effect (i.e., the probability of rejecting a
false null hypothesis). If statistical power is less than .50, I think
that it is unethical to allow such research to be done -- who wants to do
research where the probability of making a Type II error is greater than
50%? In the course of doing an a priori power analysis, one can determine
the number of subjects/participants one will need to detect the effect
size one has specified (and the total sample will probably need somewhat
more people in order to take into account subject loss due to attrition,
errors made in procedure, acts of God, etc.); a sketch of such an
analysis follows below.
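
Here is a minimal sketch of such an a priori power analysis in Python,
using the statsmodels package. The design (an independent-groups t test)
and the assumed effect size (Cohen's d = 0.5, a "medium" effect) are
illustrative choices of mine, not values taken from any of the studies
discussed:

# A priori power analysis for an independent-groups t test.
# The assumed effect size (d = 0.5) is illustrative only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# How many subjects per group to detect d = 0.5 with power = .80?
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(n_per_group)   # about 64 per group

# Conversely, what power does n = 10 per group actually buy you?
print(analysis.power(effect_size=0.5, nobs1=10, alpha=0.05))   # about .18

Note that with 10 subjects per group, power is far below the .50
threshold I mentioned above.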
If you can't get enough subjects for an acceptable level of power
(e.g., power = .80, which I consider to be low because it means that
there is a 20% chance of committing a Type II error, a rate four times
that of making a Type I error -- this makes clear a researcher's biases
and the costs associated with making errors), one shouldn't do the study.
One might consider doing a pilot study to get an estimate of the
effect size that one might obtain, and if it is too small to be detected
given one's resources, do a qualitative study instead.
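
One can also run the power analysis the other way around: given the
sample one can actually afford, what is the smallest effect detectable
with acceptable power? A sketch, again assuming the same hypothetical
two-group design:

from statsmodels.stats.power import TTestIndPower

# Smallest standardized effect detectable with n = 10 per group
# at power = .80 and alpha = .05 (two-sided).
d_min = TTestIndPower().solve_power(nobs1=10, alpha=0.05, power=0.80)
print(d_min)   # about 1.3

That is, a 10-subject-per-group study of the kind described above can
reliably detect only a very large effect; anything much smaller will
usually be missed.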
Neuroimaging studies are expensive everywhere, and it is very bad
practice to use them in studies where it is almost impossible to detect
a false null hypothesis. They should be used only in studies where firm
conclusions can be reached (i.e., high-powered, properly conducted
studies). Anything else is a waste of precious resources, and it is
exactly the type of practice that Button et al. complain about.
Using low-power studies and then meta-analyzing them may result in one
detecting systematic errors and biases unrelated to the phenomenon being
studied (i.e., the "tweaking" that researchers do to get statistically
significant results). Meta-analyze DiCara's published studies and tell me
what mean effect one obtains. After you do so, I'll explain why it's
wrong.
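
In the meantime, here is a simulation sketch of the mechanism I have in
mind (the parameters are arbitrary; the mechanism is what matters). The
true effect is exactly zero, but if only studies that reach p < .05 in
the predicted direction get published, the mean published effect size --
what a naive meta-analysis of such a literature would recover -- is
strongly non-zero:

# Simulation: true effect = 0, but selective publication of
# significant results in the predicted direction inflates the
# meta-analytic mean effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2013)
n_per_group, n_studies = 10, 500
published_d = []

for _ in range(n_studies):
    treat = rng.normal(0.0, 1.0, n_per_group)   # true effect is zero
    ctrl = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(treat, ctrl)
    if p < .05 and t > 0:   # "published" only if significant, predicted direction
        pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
        published_d.append((treat.mean() - ctrl.mean()) / pooled_sd)

print(len(published_d), "of", n_studies, "studies 'published'")
print("mean published d:", np.mean(published_d))   # around 1.0, not 0

A handful of early "successes" plus a file drawer full of failed
replications is enough to give a meta-analysis a respectable-looking mean
effect for a phenomenon that does not exist.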
-Mike Palij
New York University
[email protected]