My tone in response to Martin's comment was more argumentative than I'd
intended.  I think his point is a good one, if I understand it correctly.
With peer review, there's a real risk of group-think, where evidence for a
point of view accumulates merely because most people in a given field share
that point of view, and they are harder on papers that undermine their
perspective than on those that support it.  (Here, for example, I admit that
I'm more critical of Paul Cherubini's posts than I am of posts with which I
generally agree.)

I can think of three checks that we have in place to mitigate the
group-think phenomenon.  (1) As others have said, we all must read
skeptically and not have blind faith in peer reviewers, and I think most of
us are trained to do so.  (2) We are trained not to put too much weight on a
single study.  The test of repeatability has exposed mistakes and frauds in
the past, so holding ideas to that standard can help us avoid jumping on
wrong-headed bandwagons.  (3) You can boost your reputation considerably by
shooting down an accepted hypothesis, so there's always a strong incentive
to challenge widely-held assumptions.

Overall, the scientific system seems to work quite a bit better than
any other known method of acquiring accurate knowledge about objective
reality.  We've moved beyond W-waves and phlogiston, so we must be doing
something right.  All of which, I'm sorry to say, does little to address the
original question, but I think it goes some way toward explaining why we can
have more confidence in the science of a quantum physicist or a fisheries
biologist than in the magic of a shaman.

Jim Crants


On Wed, Jul 8, 2009 at 4:27 PM, Bill Silvert <[email protected]> wrote:

> I support Martin in this, although I think that James raises a valid point.
> Peer review is only a poor indicator of the quality of a paper, and often
> editors end up sending papers to graduate students or even people in other
> fields. About a third of the reviewing requests I receive are inappropriate,
> and often I can't even understand what the paper is about.
>
> Of course this depends on the particular discipline. In fields where there
> is a standard methodology peer review can certify that the work was done
> correctly. In other fields though the reviewer may only be certifying that
> the paper follows the current paradigm (note the quote from Hilborn in
> another posting on this topic).
>
> Basically we have no definitive way of separating valid results from junk.
> I am sure that there were plenty of senior scientists who would have
> rejected the papers of Darwin, Einstein, Wegener and many others. There are
> also hundreds of papers published in good journals which turned out to be
> wrong.
>
> The suggestion that you look at the journal's mission statement may help.
> Reputable journals abound, the problem arises with obscure new journals that
> may have an agenda. (Certainly no respectable scientist would want to
> publish a complicated model in the online Journal of Simple Systems,
> www.simple.cafeperal.eu - I can say this with confidence, since I am the
> editor and publisher). If the journal seems strange or inappropriate, think
> about why the paper ended up there.
>
> Bill Silvert
>
> ----- Original Message ----- From: "James Crants" <[email protected]>
> To: <[email protected]>
> Sent: Wednesday, July 08, 2009 3:22 PM
> Subject: Re: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
>
>
>
> Martin,
>>
>> This all sounds good in the abstract, but it's beyond me how we could do
>> better than peer-review to establish which science is done well and which
>> is not.  No matter how reliable a system is, it's always easy to say "we
>> should do better than this."  But what would you propose to improve on
>> our current system of vetting scientific research?
>>
>> You don't have to get very far from your own field to run into research
>> you aren't equipped to validate.  Most pollination biologists probably
>> aren't prepared to properly assess the quality of research on insect
>> cognition, for example, so they have to rely on other scientists to
>> evaluate the research for them.  To what better authority could they
>> possibly appeal?
>>
>> I would certainly not want people who don't "have faith" in the scientific
>> method deciding which papers can and cannot be published.
>>
>> Jim Crants
>>
>> On Wed, Jul 8, 2009 at 10:34 AM, Martin Meiss <[email protected]> wrote:
>>
>>     I find this exchange very interesting, and it points up a major
>>> problem caused by the burgeoning of scientific knowledge and the
>>> limitations of the individual.  As scientists, we believe (have faith)
>>> that the scientific method is the best means of arriving at truth about
>>> the natural world.  Even if the method is error-prone in some ways, and
>>> is subject to various forms of manipulation, it is historically
>>> self-correcting.
>>>      The problem is that no individual has enough time, knowledge, and
>>> background to know if the scientific method is being properly applied by
>>> all those who claim to be doing so.  We hear someone cite a
>>> suspicious-sounding fact (i.e., a fact that doesn't correspond to our
>>> perhaps-erroneous understanding), and we want to know if it is based on
>>> real science or pseudo-science.  So what do we do?  We ask if the
>>> supporting research appeared in a peer-reviewed journal (i.e., has this
>>> been vetted by the old-boys network?).  This sounds a little like the
>>> response of the people who first heard the teachings of Jesus.  They
>>> didn't ask "How do we know this is true?"  They asked "By whose
>>> authority do you speak?"
>>>       These two questions should never be confused, yet the questions
>>> "Did it appear in a peer-reviewed journal?" and "Is that journal REALLY
>>> a peer-reviewed journal?" skate perilously close to this confusion.  We
>>> are looking for a short-cut, for something we can trust so we don't have
>>> to be experts in every branch of science and read every journal
>>> ourselves.  I don't know the answer to this dilemma, and perhaps there
>>> is none, but we should be looking for something better than "Does this
>>> have the stamp of approval of people who think like I do?"  We should be
>>> looking for something that is not just an encodement of "Does this
>>> violate the doctrine of my faith?"  The pragmatic necessity of letting
>>> others decide whether certain research is valid should be no excuse for
>>> relaxing our personal vigilance and skepticism.  Otherwise, we fall into
>>> the same trap that ensnares the religionists who are trying to undermine
>>> science because it threatens their faith.
>>>
>>>                Martin M. Meiss
>>>
>>>
>>> 2009/7/8 Kerry Griffis-Kyle <[email protected]>
>>>
> I am teaching a Sophomore/Junior level evolution course at Texas Tech
>>> > (where a significant proportion of my students believe evolution is
>>> > anti-God).  One of the activities I have them do is take three
>>> > creationist claims about science and use the peer-reviewed scientific
>>> > literature to find evidence to support or refute the claim.  It makes
>>> > them really think about the issues; and if they follow the directions,
>>> > it does a better job than any of my classroom activities at convincing
>>> > them that the claims against evolution are just a bunch of hooey.
>>> > Unfortunately, there are journals claiming peer-review status that are
>>> > not.  It can be very frustrating.
>>> >
> Like Raphael, I also wonder if there is a good source the students can
>>> > use as a rubric for telling if a journal article is peer-reviewed.
>>> >
>>> > *****************************
>>> > Kerry Griffis-Kyle
>>> > Assistant Professor
>>> > Department of Natural Resources Management
>>> > Texas Tech University
>>> >
>>> > --- On Tue, 7/7/09, Raphael Mazor <[email protected]> wrote:
>>> >
>>> >
>>> > From: Raphael Mazor <[email protected]>
>>> > Subject: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
>>> > To: [email protected]
>>> > Date: Tuesday, July 7, 2009, 5:03 PM
>>> >
>>> >
> I've noticed a number of cases lately where groups with a strong
>>> > political agenda (on topics like climate change, evolution, stem
>>> > cells, or human health) cite "peer reviewed" studies in journals that
>>> > are essentially fabricated for the purpose of advancing a specific
>>> > viewpoint.
>>> >
>>> > What's a good way to tell when a journal is baloney? Of course, it's
>>> > easy for a scientist in his or her own field to know when a journal is
>>> > a sham, but how can we let others know it's obviously fake? For
>>> > example, are only "real" journals included on major abstract indexing
>>> > services?
>>> >
>>> > -- <><><><><><><><><>
>>> > Raphael D. Mazor
>>> > Biologist
>>> > Southern California Coastal Water Research Project
>>> > 3535 Harbor Boulevard, Suite 110
>>> > Costa Mesa, CA 92626
>>> >
>>> > Tel: 714-755-3235
>>> > Fax: 714-755-3299
>>> > Email: [email protected]
>>> >
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>
>>
>
>
