----- Original Message ----- 
On Thu, 13 Sep 2007, Shearon, Tim wrote:
> Mike- Your points are well taken. On the other hand, you said, " as
> students of S.S. Stevens would know, we rely upon an internal scale of
> magnitude which allows us to arbitrarily assign different degrees of
> magnitude to different discrete entities which may embody different
> qualitative properties." I think you can see that there are multiple
> things mitigating press presenting it accurately. If they presented this
> carefully and accurately they'd have neither time nor reader attention
> for important issues like Britney's rear or who Nicole is dating. :)

One thing to remember is that the NY Times is considered
the U.S.'s "paper of record", which implies that one should be
able to rely upon it as a reasonably reliable reporter of information,
beliefs, and knowledge in popular terms (as well as of psychological
myths such as Freud's iceberg metaphor).  I can appreciate the
difficulty of presenting technical issues such as Stevens'
theory, but newspapers *seem* to do a credible job when it comes
to writing about topics in physics such as black holes, virtual particles,
"spooky action at a distance", and GUTs (Grand Unified Theories).
Then again, I'm not a physicist, and it is possible that when physicists
read these articles they too feel frustrated by the lack of detail and
accuracy.

> Seriously, I really do understand your frustration at the low level of
> presentation (remembering that they aim this stuff at about a 10th
> grader and they aren't talking about gifted 10th graders!). As someone
> interested in the neurosciences I can also state that the information
> presented in that piece is pretty shallow and reflects extremely poor
> understanding of what was done.

It would be nice if some talented scientist(s) started a blog called
something like "Scientific Accuracy in the Popular Media" to review
the scientific accuracy of science-related stories in the media.
I assume that if they were still alive, Stephen Jay Gould and/or Carl
Sagan might have done this.  I am waiting for one of their intellectual
offspring to take up this chore.

I also hope that such a blogger would "agitate" for the popular
media to follow certain basic guidelines, such as: (a) NEVER use
"some scientists say"; instead, name the specific researchers
so that an interested reader (or our students) can check PsycInfo,
PubMed, and other article databases for their publications and see
what they actually wrote; (b) provide references to the research
being presented; and (c) minimize quotes from researchers about the
research, issues, and other aspects of the topic being presented.
Quotes may make for popular writing, but they also make for bad
scholarship.

Let me expand upon the issue of using quotes, because I've thought
about it for a while, and it seems to me to mark an important
difference between the scientific and popular presentation of
scientific information.

If one relies upon an article or paper that has undergone peer review,
then at the very least some checking of the validity of its statements
has been done, along with some evaluation of their accuracy and
"coverage" (the limits on how general the results or conclusions are).
However, if a reporter simply asks a researcher to provide a
statement or an explanation, there is no guarantee that the reporter
or anyone else in the process will be competent enough to catch
errors or other problems.  The statement will be published, and
researchers familiar with the material will wonder, "Did so-and-so
really say that, or did the reporter record the statement incorrectly?"

One example of the last point involves a passage in a recent pop
neuroscience book by a science writer that I'm reviewing.  The
author asks a neuroscientist about some aspects of working memory,
and the neuroscientist is quoted as saying, to the effect, "the
capacity of working memory is very limited, on the order of 4 to 7
BITS".  Anyone familiar with George Miller's "Magical Number Seven"
paper will find this bizarre, because in it Miller argued that (a) bits
in the information-theory sense might describe some aspects of
attentional/perceptual performance but (b) could NOT be used
to measure the capacity of immediate (working/short-term) memory,
because its capacity can be increased through recoding or
chunking -- thus, working memory may hold 7 +/- 2 CHUNKS,
each of which can vary in the number of bits it contains.
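Miller's distinction can be made concrete with a little
information-theory arithmetic.  The sketch below (in Python) is my
own illustration, not a calculation from Miller's paper: it shows
that a fixed span of about seven chunks can carry very different
numbers of bits depending on what each chunk is.

```python
import math

def bits_per_item(alphabet_size):
    """Information (in bits) carried by one item drawn uniformly
    from an alphabet of the given size."""
    return math.log2(alphabet_size)

# Seven binary digits: each chunk carries 1 bit -> 7 bits total.
binary_span_bits = 7 * bits_per_item(2)

# Seven decimal digits: each chunk carries ~3.32 bits -> ~23 bits total.
decimal_span_bits = 7 * bits_per_item(10)

# Recoding binary digits into octal triplets (chunking): the same
# seven-chunk span now covers 7 * 3 = 21 of the original binary digits.
octal_span_bits = 7 * bits_per_item(8)

print(round(binary_span_bits, 2))   # 7.0
print(round(decimal_span_bits, 2))  # 23.25
print(round(octal_span_bits, 2))    # 21.0
```

Since the bit totals differ threefold while the chunk count stays at
seven, a capacity stated in bits -- as in the book's quote -- misses
Miller's point entirely.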

So, did the neuroscientist misspeak?  Did the science writer
get it wrong?  Did an editor change "chunks" to "bits" because
the general reader might have more of an "intuitive feel" for bits
than for "chunks"?  Or are there large parts of the scientific
community that actually think Miller said "bits" instead of
"chunks" (the book's author thanks many scientists for contributing
to and reading parts of the book)?  This point and others on
cognitive psychology in the book had me wondering who was doing
the "fact checking" -- it also made me wonder about the accuracy
of the reported neuroscience I was taking "on faith" simply because
it lies outside my own area of expertise but which might represent
as big a blunder as failing to distinguish chunks from bits.

> But, all things considered, I have to
> point out that I got more questions from that piece (both in and out of
> class) than most things in the textbook. Most folks (especially
> colleagues!) were surprised to find how much the article left out -
> several have actually asked (gasp!) for further readings and the student
> newspaper wants a piece in response with more detail (Plus I've received
> two requests for comment from local papers- now talking to the press-
> there is a minefield!).

As I mention above, tell the local papers to (a) always identify the
researcher being quoted and (b) allow the publication of references
(in APA style) for the research referred to.

> All in all it seems to have generated a bunch
> of "teachable moments" - that has helped to temper my frustrations
> somewhat. :) As to your points on correlation being confused with cause
> etc., I couldn't agree more (or with how frustrating that is). One of
> our emeritus professors is known to often use the phrase, "They are
> grossly over trained and grossly under educated". Seems to ring true.
> Not that this makes it much less frustrating!

Let me ask one final question, because it has recently started to
bother me.  When an article gets published in an APA journal, a
Psychonomic Society journal, or another "traditional" journal, I have
some sense of the degree of review it receives before publication.
I am less clear on the review process for journals such as Nature
Neuroscience, but my specific question concerns the journal "Proceedings
of the National Academy of Sciences" (PNAS), because my informal
examination of citations to this journal seems to indicate that "high
citation" articles in neuroscience have been published there.

I recognize that this is supposed to be a high-prestige journal because
it is published by the NAS.  However, is my memory correct that
this is also a pay-per-page journal in which members of the NAS
can publish and -- this is the critical point -- without undergoing
peer review?  A similar arrangement existed for the old Psychonomic
Society journal "Bulletin of the Psychonomic Society", where members
of the society could publish short articles (1-4 pages) for free and
without peer review (Disclosure:  I published there; see Palij,
Leveine, & Kahan, 1984), but the publications were of "uneven quality"
even though some well-known experimental psychologists had published
there (one of my favorites has Beth Loftus as a co-author and the
title "How Deep Is the Meaning of Life", using a levels-of-processing
manipulation).

So, does PNAS have peer review prior to publication, or is it simply
a pay-per-page journal?

-Mike Palij
New York University
[EMAIL PROTECTED]





