On Wed, 21 Sep 2016 05:27:46 -0700, William Scott wrote:
Reviewing Results-Free Manuscripts

An open-access journal is trialing a peer-review process in
which reviewers do not have access to the results or discussion
sections of submitted papers.

http://www.the-scientist.com/?articles.view/articleNo/47081/title/Reviewing-Results-Free-Manuscripts/

It could be just me (it probably *is* just me), but I think the
people conducting this "experiment" are making some fallacious
assumptions. Let me make a couple of points:

(1) The article linked to above ends with a quote from one of the
researchers involved, and I reproduce the last sentence of the
quote:

|We believe that this could help reduce publication bias by
|basing the decision to publish purely on the scientific rigor
|of the study design.

First, I think the sentence should end with "scientific rigor of
the *reported* study design".  Why? Because we have no
guarantee that the reported design -- for that matter, the
entire method section -- is being reported honestly and
accurately.  Under the honor system we use, we assume that
the method section provides a reliable description of the
participants, materials, setting, design, and procedures
used.  The so-called "replication crisis", however, casts
doubt on whether this is actually the case.  Researchers have
learned to write methods sections that emphasize what they
think will help their manuscript get published, so
complications that arise during the conduct of the research
(e.g., problems recruiting participants, difficulty with
materials, variations in procedure and setting, etc.) may be
omitted or minimized, thus making exact replications difficult
to conduct.  Again,
the classic example of this is the Leo DiCara research that
showed operant conditioning of the autonomic nervous
system, a result that demonstrated a decline effect (i.e.,
the results went from significant to nonsignificant, never
to be significant again).  When Miller and Dworkin attempted
to replicate the results, they were unable to (and the original
research had been done in Miller's lab if memory serves).
Miller and Dworkin concluded that the original effect was
"real" but that something about the procedure that was not
reported had produced the significant results, and that they
were unable to determine what that factor/variable was (this
is the charitable interpretation of the DiCara research).

Unless the methods used are fundamentally flawed -- and it is
unclear to me how qualified reviewers are to evaluate an
experimental design unless this is their area of expertise --
I don't think the results will mean much.  Since the manuscripts
to be used are based on actual research, we don't know whether
the research is actually valid or not -- replications would
help sort that out.  It would have been better to create, say,
10 manuscripts, half of which had specific defects and half of
which were free from defects, starting with the statement of the
research hypothesis, through the method section, through
the results section, and finally through the discussion
section.  This becomes something like a signal detection
task, and the goal is to determine how well reviewers can
discriminate deliberately flawed studies from non-flawed studies.
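
To make the idea concrete, here is a toy sketch (in Python) of
how such a discrimination task might be scored.  The counts are
made-up numbers for a single hypothetical reviewer, not data
from any actual study:

    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        # Discriminability index: d' = z(hit rate) - z(false-alarm rate).
        # The +0.5/+1 (log-linear) correction keeps rates of 0 or 1
        # from producing infinite z-scores.
        hr = (hits + 0.5) / (hits + misses + 1)
        far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        z = NormalDist().inv_cdf
        return z(hr) - z(far)

    # Hypothetical reviewer: flags 4 of 5 deliberately flawed
    # manuscripts (hits) but also 1 of 5 sound ones (false alarm).
    print(round(d_prime(4, 1, 1, 4), 2))  # d' = 1.35; 0 = no discrimination

A reviewer who flags flawed and sound manuscripts at the same
rate gets a d' of zero; the larger the d', the better the
discrimination.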

(2)  Unstated in the Scientist article are the reasons why the
results and discussion sections are omitted.  The statement
that the article is based on, and which it obliquely cites (see:
http://phys.org/news/2016-09-results-free-peer-review-academic-publishing.html )
makes clear that the real interest is in whether the presence or
absence of statistically significant results affects the
judgment of the reviewer.  Quoting from the statement:

|Dr Button said: "The current system favors publication bias
|because significant results are seen as more important to the
|scientific record by publishers, academics and the systems
|in place to measure their performance. This new trial should
|at least begin to address one area where publication bias arises."

There are obvious problems with this research:
(a) It assumes that negative results are NEVER published, even
replication attempts.  The unanswered question is whether the
journal involved ("BMC Psychology") ever publishes negative
results.  If it does not, it seems unlikely that the problem lies
with the reviewers and more likely that it stems from editors and
publishers.  I'm not familiar with the journal in question, but
research with negative results, especially in the biomedical
field, *DOES* get published -- I know because I am a co-author
on one such paper. See:
Handelsman, L., Rosenblum, A., Palij, M., Magura, S., Foote, J.,
Lovejoy, M., & Stimmel, B. (1997). Bromocriptine for cocaine
dependence. The American Journal on Addictions, 6(1), 54-64.
http://onlinelibrary.wiley.com/doi/10.1111/j.1521-0391.1997.tb00392.x/full

(b) Again, since actual research manuscripts are to be used, we
don't know which results will replicate, significant or not.
The use of constructed manuscripts would make it clearer whether
the significance of results affects the decision of a reviewer:
have half the papers report significant results and half report
nonsignificant results, then cross this with design quality,
giving a 2x2 factorial design for creating papers [valid vs.
invalid design x significant vs. nonsignificant results], as
sketched below.
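
Here is a toy sketch (again in Python) of how such a set of
constructed manuscripts might be laid out; the labels and the
cell size are entirely hypothetical:

    from itertools import product
    import random

    # The two crossed factors of the 2x2 design.
    DESIGN = ["valid", "invalid"]
    RESULTS = ["significant", "nonsignificant"]

    # Say, 3 constructed manuscripts per cell -> 12 manuscripts total.
    manuscripts = [
        {"id": f"ms-{i:02d}", "design": d, "results": r}
        for i, (d, r) in enumerate(list(product(DESIGN, RESULTS)) * 3,
                                   start=1)
    ]

    random.shuffle(manuscripts)  # randomize order of presentation
    for ms in manuscripts:
        print(ms)

Each reviewer would then see manuscripts drawn from all four
cells, and one could test for main effects of design quality and
of significance, and for their interaction, on the recommendation
to publish.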

So, is the research idea as stated in either the Scientist article
or the statement from BMC Psychology interesting?  I don't know.
Why? Because one would have to have something like a research
proposal explaining the background, why certain hypotheses were
selected, why the procedure was chosen (i.e., actual manuscripts
vs. specially constructed manuscripts that systematically vary
the factors that presumably affect reviewers' judgments), what
results would be expected (and why), and the implications of the
results.

But, hey, what do I know, right?

-Mike Palij
New York University
m...@nyu.edu
