Mike's hit his three-post limit, so he's asked me to post this by proxy
for him. :)

--David Epstein

Mike Palij <[email protected]> says....
--------------------------------------------
RE: [tips] Frequency of Type I errors in published research.

On Mon, 02 Apr 2012 09:29:55 -0700, Karl L Wuensch wrote:
>        Should I send him this link:
>http://core.ecu.edu/psyc/wuenschk/StatHelp/Type1.htm  ??

You could send Medin the link but he might send back this abstract from
Rosenthal (1979):

|For any given research area, one cannot tell how many studies have
|been conducted but never reported. The extreme view of the "file
|drawer problem" is that journals are filled with the 5% of the studies
|that show Type I errors, while the file drawers are filled with the 95%
|of the studies that show nonsignificant results. Quantitative procedures
|for computing the tolerance for filed and future null results are
|reported and illustrated, and the implications are discussed.

Rosenthal, R. (1979). The "file drawer problem" and tolerance for
null results. Psychological Bulletin, 86(3), 638-641.
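For anyone who hasn't seen the "quantitative procedure" Rosenthal means: it is
the fail-safe N, the number of unpublished null-result studies that would have
to be sitting in file drawers before the Stouffer-combined Z of the published
studies dropped below the one-tailed .05 criterion. A minimal sketch (the five
Z values are made up purely for illustration):

```python
def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's (1979) fail-safe N: the number X of unpublished
    studies averaging Z = 0 needed to pull the Stouffer combined Z,
    sum(Z) / sqrt(k + X), down to the one-tailed .05 cutoff (1.645)."""
    k = len(z_values)
    x = (sum(z_values) / z_alpha) ** 2 - k
    return max(0.0, x)

# Hypothetical Z scores from five published studies:
print(fail_safe_n([2.1, 1.8, 2.5, 1.7, 2.0]))  # roughly 32.7
```

So about 33 hidden null results would be needed to overturn those five
hypothetical studies. Note the assumption doing all the work: the hidden
studies are taken to average Z = 0, which is exactly what Scargle attacks.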

I don't know if Medin is relying upon this as a source, but it is possible.
As pointed out previously, the 5% figure is a worst-case number but,
unfortunately, Rosenthal was an optimist.  The example you provide on your
website from Gasparikova-Krasnec & Ging (1987) is fine as
a pedagogical tool, but it is unrealistic because the number of unpublished
papers can never be known with certainty.  Jeffrey Scargle (2000) may have
presented the problem best, and I provide the abstract from his paper:

|Abstract--Publication bias arises whenever the probability that a
|study is published depends on the statistical significance of its results.
|This bias, often called the file-drawer effect because the unpublished
|results are imagined to be tucked away in researchers' file cabinets,
|is a potentially severe impediment to combining the statistical results
|of studies collected from the literature. With almost any reasonable
|quantitative model for publication bias, only a small number of studies
|lost in the file drawer will produce a significant bias. This result
|contradicts the well-known fail-safe file-drawer (FSFD) method for
|setting limits on the potential harm of publication bias, widely used
|in social, medical, and psychic research. This method incorrectly treats
|the file drawer as unbiased and almost always misestimates the
|seriousness of publication bias. A large body of not only psychic
|research, but medical and social science studies as well, has mistakenly
|relied on this method to validate claimed discoveries. Statistical
|combination can be trusted only if it is known with certainty that all
|studies that have been carried out are included. Such certainty is
|virtually impossible to achieve in literature surveys.

Scargle, J. D. (2000). Publication bias: The "file-drawer" problem in
scientific inference. Journal of Scientific Exploration, 14(1), 91-106.

NOTE:  I am familiar with the Journal of Scientific Exploration and
realize the problem of relying upon an article there; nonetheless, Scargle
has been cited in more "reputable" journals.

The key point behind Scargle's analysis is that Rosenthal's solution, which
might keep the false positive rate no greater than 5%, fails because,
unlike in the examples used by Gasparikova-Krasnec & Ging, the total
number of relevant study results cannot be known, and what is known is
likely to be biased.
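You can see why the bias matters in a few lines. In this toy simulation
(my own illustration, not code from Scargle's paper), every one of 1,000
studies tests a true null effect, but a significance filter decides which
ones escape the file drawer, and the surviving "literature" combines to a
wildly significant result:

```python
import math
import random

random.seed(42)

# Every study tests a true null effect: under H0, Z ~ N(0, 1).
studies = [random.gauss(0.0, 1.0) for _ in range(1000)]

# Publication filter: only one-tailed "significant" results (Z > 1.645)
# make it out of the file drawer into the literature.
published = [z for z in studies if z > 1.645]

# Stouffer combination of the published studies alone.
stouffer_z = sum(published) / math.sqrt(len(published))
print(f"{len(published)} of {len(studies)} null studies got published")
print(f"Stouffer combined Z of the published literature: {stouffer_z:.1f}")
```

The combined Z comes out enormous even though every underlying effect is
zero, and a fail-safe N computed from those published Z scores would
reassure you that the result is untouchable. That is Scargle's complaint:
treating the file drawer as unbiased gets the answer exactly backwards.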

To make things worse, the article that Medin was referring to (by
Simmons, Nelson, & Simonsohn, 2011) shows that researchers
inflate the false positive rate well beyond the 5% level through a
number of devices.  This quote from their article, I think, makes the point:

|In this article, we show that despite the nominal endorsement
|of a maximum false-positive rate of 5% (i.e., p ≤ .05),
|current standards for disclosing details of data collection and
|analyses make false positives vastly more likely. In fact, it is
|unacceptably easy to publish "statistically significant" evidence
|consistent with any hypothesis. (p. 1359)
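Their point is easy to verify numerically. In this toy simulation (mine,
not theirs), each null test yields a Uniform(0, 1) p-value, and a
"flexible" researcher measures several dependent variables and reports
success if any one of them reaches p < .05:

```python
import random

random.seed(1)

ALPHA = 0.05
N_SIM = 100_000

def flexible_reject(n_dvs):
    """One simulated null experiment: under H0 each test's p-value is
    Uniform(0, 1); the 'flexible' analysis claims success if ANY of the
    n_dvs dependent variables comes out below .05."""
    return any(random.random() < ALPHA for _ in range(n_dvs))

for n_dvs in (1, 2, 3):
    rate = sum(flexible_reject(n_dvs) for _ in range(N_SIM)) / N_SIM
    print(f"{n_dvs} DV(s): false-positive rate ~ {rate:.3f}")
```

With two DVs the rate is already near 10%, and Simmons et al. report that
stacking several such degrees of freedom (extra DVs, optional stopping,
covariates, dropping conditions) pushes it above 60%.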

I could be wrong, but it might be more worthwhile to review the
recommendations that Scargle and Simmons et al. make than to
debate whether Medin was right or wrong.  But what do I know,
right?

-Mike Palij
New York University
[email protected]




