I think Jonathan has identified the crux of the issue here: well-trained scientists do not rely on the opinions of others to determine which papers are valid and which are perhaps flawed. Critical thinking/reading is a primary goal of all graduate programs and is something we introduce undergraduates to in advanced courses. This sometimes takes extreme forms; I have seen journal club sessions where there is almost a competition among students for who can most effectively eviscerate a paper to display its defects. By the time our graduate students complete their training they are nearly intellectual piranhas, ready to rip apart any paper or proposal that comes their way, and many a young scientist has built a career by deconstructing the work of their predecessors.

This is both the strength and the horror of the peer review process: we send off our precious intellectual offspring (papers and proposals) with what we think is great hope and promise, only to have them shredded by the reviewers. Anyone who has participated in this process knows that it works very well most of the time, but as I said at the beginning, it's all about individual assessments, and it is guaranteed that there will be disagreements over the value and validity of any individual paper.

That said, I would caution contributors to this list to take care with the use of words. A word like 'faith' to a scientist (will my PCR reaction work today or not?) means something very different from the same word in a religious context. Some who read these posts may try to use these exchanges to support personal views that the writer never intended, for example to claim that 'faith' is intrinsic to science and thereby lend credibility to alternatives to science. Words like these are loaded with a variety of meanings, so I would advocate sticking to a scientific vernacular in writings posted to this list.

Mitch

Jonathan Greenberg wrote:
Martin:

I certainly hope most scientists don't rely on "faith" in the peer-review process to determine whether a paper is valid. I've always treated peer review as just setting a floor on reliability -- e.g., the paper isn't AWFUL if it made it into this journal, and it is at least worth reading; the better the journal, typically, the higher the bar, but no journal comes close to being infallible. If you've reviewed for mid- to upper-tier journals, you'll know that the vast majority of submissions are terrible -- we throw out a LOT of bad research. Since science requires repeatability of results, if a paper is absolutely novel and brand new, I will ALWAYS spend a LOT more time reading through it than if it's basically confirming what a lot of other papers have confirmed -- peer review + repetition of results = higher reliability.

Personally, I disagree with the statement "The problem is that no individual has enough time, knowledge, and background to know if the scientific method is being properly applied by all those who claim to be doing so." If you are citing a paper or using a paper to guide your own research, as a scientist you should be reading it carefully enough to decide whether or not it is scientifically grounded -- if you are just pulling "facts" out of the abstract and discussion, you aren't really doing your job. This type of behavior WILL catch up with you eventually -- if you base your own research on an assumption that someone else's work is valid simply because it made it into a journal, and that work proves to be in error, you are shooting yourself in the foot down the road.
--j

Martin Meiss wrote:
      I find this exchange very interesting, and it points up a major problem caused by the burgeoning of scientific knowledge and the limitations of the individual. As scientists, we believe (have faith) that the scientific method is the best means of arriving at truth about the natural world. Even if the method is error-prone in some ways, and is subject to various forms of manipulation, it is historically self-correcting.
      The problem is that no individual has enough time, knowledge, and background to know if the scientific method is being properly applied by all those who claim to be doing so. We hear someone cite a suspicious-sounding fact (i.e., a fact that doesn't correspond to our perhaps-erroneous understanding), and we want to know if it is based on real science or pseudo-science. So what do we do? We ask if the supporting research appeared in a peer-reviewed journal (i.e., has this been vetted by the old-boys' network?). This sounds a little like the response of the people who first heard the teachings of Jesus. They didn't ask "How do we know this is true?" They asked "By whose authority do you speak?"
      These two questions should never be confused, yet the questions "Did it appear in a peer-reviewed journal?" and "Is that journal REALLY a peer-reviewed journal?" skate perilously close to this confusion. We are looking for a short-cut, for something we can trust so we don't have to be experts in every branch of science and read every journal ourselves. I don't know the answer to this dilemma, and perhaps there is none, but we should be looking for something better than "Does this have the stamp of approval of people who think like I do?" We should be looking for something that is not just an encoding of "Does this violate the doctrine of my faith?" The pragmatic necessity of letting others decide whether certain research is valid should be no excuse for relaxing our personal vigilance and skepticism. Otherwise, we fall into the same trap that ensnares the religionists who are trying to undermine science because it threatens their faith.

                 Martin M. Meiss


2009/7/8 Kerry Griffis-Kyle <kerr...@yahoo.com>

I am teaching a sophomore/junior-level evolution course at Texas Tech (where a significant proportion of my students believe evolution is anti-God). One of the activities I have them do is take three creationist claims about science and use the peer-reviewed scientific literature to find evidence to support or refute each claim. It makes them really think about the issues; and if they follow the directions, it does a better job than any of my other classroom activities of convincing them that the claims against evolution are just a bunch of hooey. Unfortunately, there are journals claiming peer-review status that do not actually practice it. It can be very frustrating.

Like Raphael, I also wonder if there is a good source the students can use
as a rubric for telling if a journal article is peer-reviewed.

*****************************
Kerry Griffis-Kyle
Assistant Professor
Department of Natural Resources Management
Texas Tech University

--- On Tue, 7/7/09, Raphael Mazor <rapha...@sccwrp.org> wrote:


From: Raphael Mazor <rapha...@sccwrp.org>
Subject: [ECOLOG-L] "real" versus "fake" peer-reviewed journals
To: ECOLOG-L@LISTSERV.UMD.EDU
Date: Tuesday, July 7, 2009, 5:03 PM


I've noticed a number of cases lately where groups with a strong political
agenda (on topics like climate change, evolution, stem cells, or human
health) cite "peer reviewed" studies in journals that are essentially
fabricated for the purpose of advancing a specific viewpoint.

What's a good way to tell when a journal is baloney? Of course, it's easy for a scientist to recognize a sham journal in his or her own field, but how can we help others see that it's fake? For example, are only
"real" journals included on major abstract indexing services?

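One rough, partial heuristic along these lines is to check whether a journal is even registered with a major indexing or registration service. Below is a minimal sketch, assuming Python and the public Crossref REST API's journals-by-ISSN lookup (api.crossref.org); the specific endpoint, journal, and ISSN used here are illustrative assumptions, not an endorsed test. Being indexed is at best a weak positive signal, and absence proves nothing.

    import json
    import urllib.error
    import urllib.request

    def journal_in_crossref(issn):
        """Return True if the given ISSN is registered with Crossref.

        A weak signal only: absence does not prove a journal is fake,
        and presence does not prove it practices genuine peer review.
        """
        url = "https://api.crossref.org/journals/%s" % issn
        try:
            with urllib.request.urlopen(url) as resp:
                # Crossref wraps successful lookups in {"status": "ok", ...}
                return json.load(resp).get("status") == "ok"
        except urllib.error.HTTPError as err:
            if err.code == 404:  # ISSN unknown to Crossref
                return False
            raise

    # Example: Ecology (published by ESA), print ISSN 0012-9658.
    print(journal_in_crossref("0012-9658"))

Any such automated check should complement, not replace, reading the editorial board and the papers themselves.
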
-- <><><><><><><><><>
Raphael D. Mazor
Biologist
Southern California Coastal Water Research Project
3535 Harbor Boulevard, Suite 110
Costa Mesa, CA 92626

Tel: 714-755-3235
Fax: 714-755-3299
Email: rapha...@sccwrp.org

--
Mitchell B. Cruzan, Associate Professor
Department of Biology
P.O. Box 751
Portland State University
Portland, OR  97207

http://web.pdx.edu/~cruzan/
