Re: [ECOLOG-L] bibliometrics

2015-09-01 Thread Malcolm McCallum
I think we should just change the nomenclature.

Instead of calling them "highly ranked" journals or other similar names, we
should call them what they really are, "broad-interest journals," or
something similar.

The original rationale for citation ratings in particular was to help
librarians choose the journals of interest to the most readers, so they could
spend funds more efficiently.

Over time, that distinction between journals of broad interest and journals
that are specialized and of interest to a smaller sector was lost.

In fact, the journals with the broadest interest are, of course, Science,
Nature, and PNAS.  They are clearly publishing very important work too.
However, at least in the case of Science and Nature, interest to a broad
audience is more critical than the level of importance assigned to an
article.  Many landmark papers do not land in Science and Nature.  PNAS is
a different kind of journal, of course.

However, this concept of broad interest versus high importance got garbled.


At one point, Immunological Reviews had the largest JIF of all journals
rated by Thomson Reuters.  By that assessment, we should all dump our
research and start writing review articles about immunology, because that
journal had a higher ranking than Science or Nature.

Currently, we infer that the most important articles are published in
Science and Nature, but this comes with the caveat that roughly 10% of all
the papers in those two journals are responsible for roughly 90% of all the
citations.  In 2007, the last time I looked, roughly a third of the papers
in Science had not been cited a single time.  There is also the caveat that
the impact rating is a better predictor of whether a paper will get
retracted than of whether it will get cited.  In fact, it is a VERY
VERY poor indicator of the likelihood of getting cited.  Still, the papers
that do get cited tend to get cited a ton.  This makes sense given the goal
of serving a BROAD AUDIENCE and publishing THE MOST IMPORTANT papers.  Often,
however, the most important papers within a field are not of broad
interest to the general population of scientists, let alone the general
public.

Which brings us to my suggestion.

Papers get cited because they attract attention, whether for good or
bad reasons, although it is usually good reasons.  I had a Chair who held an
endowed position at a medical school.  He had been publishing all of his
papers in one or two journals specific to his field for years.  They were
VERY IMPORTANT.  He had previously published in Science, Nature, and
PNAS.  He told me he quit that game because he was more interested in
getting material out than in playing popularity contests.  This guy had a
damn good record, tons of grants, etc.  In my personal opinion, that is
what our goal in research should be with regard to publishing.  It should be
to publish the material where the right people will see it, not to publish
it where we win some kind of popularity contest.  It's nice to be
popular, but striving for popularity is a story we have all seen before in
high school.  Some people are popular early but wane into the sunset as they
get older.  A few stay popular for their whole lives, and a few of those
actually do things that matter.  However, a ton of less popular people end up
more popular as they age, lead happier lives, and frankly make a larger
contribution to society by not seeking to be popular.


I am not suggesting anyone should not strive to get into Science, Nature, or
PNAS.  There is a difference between what SHOULD BE and WHAT IS.  You have
to work within the bounds of reality, but that does not mean you should not
try to contribute to reality in a meaningful way so that maybe it will
function more ideally.

If we abandon calling them "high" impact ratings and start calling them
"large" impact ratings, the context is different:
Highly ranked vs. broad interest
Low ranked vs. specialized field
Best vs. broadest readership
Journals from high-impact fields vs. journals from fields with many
researchers

Another option is to devise quartile rankings within disciplines based on
impact ratings.
I think this is valid too.

For example, in herpetology, our journals line up roughly like this by
impact rating, the last time I looked.  It is far from a complete list:
Journal of Herpetology
Herpetological Journal
Herpetologica
Herpetological Conservation & Biology
Amphibia-Reptilia
Copeia (I think this has bounced back to the top 3).
Acta Herpetologica
J African Herpetology,
J SA Herpetology
Unrated: Herp Review, Herp Notes, Bull British Herp Soc, Bull of Maryland
Herp Soc, Bull of Chicago Herp Soc, Alytes, Amphib & Rept Cons.

Pretending this were a complete list, it would be easy to assess the field
by quartile ranking (a small sketch of the bucketing follows the list below).
There are 16 journals listed.

Top 25%: JH, HJ, Herp
Middle 50%: A&R, Copeia, HCB, AH, JAH, JSAH
Bottom 25%: HR, HN, BBHS, BMHS, BCHS, Alytes, ARC
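For what it is worth, here is a minimal sketch (in Python) of how that kind
of quartile bucketing could be computed.  The impact values are hypothetical
placeholders, not real JIFs, and the unrated journals are simply treated as
falling below every rated one:

# Minimal sketch of quartile bucketing by impact rating.
# NOTE: the impact values are hypothetical placeholders, not real JIFs;
# only the relative ordering loosely mirrors the list above.

rated = [
    ("Journal of Herpetology", 1.1),
    ("Herpetological Journal", 1.0),
    ("Herpetologica", 0.9),
    ("Herpetological Conservation & Biology", 0.8),
    ("Amphibia-Reptilia", 0.7),
    ("Copeia", 0.7),
    ("Acta Herpetologica", 0.5),
    ("J African Herpetology", 0.4),
    ("J SA Herpetology", 0.3),
]

# Unrated journals are treated as below every rated journal.
unrated = [
    "Herp Review", "Herp Notes", "Bull British Herp Soc",
    "Bull of Maryland Herp Soc", "Bull of Chicago Herp Soc",
    "Alytes", "Amphib & Rept Cons",
]

# Sort the rated journals from highest to lowest impact rating,
# then append the unrated ones at the bottom.
ordered = [name for name, _ in sorted(rated, key=lambda j: j[1], reverse=True)]
ordered += unrated

n = len(ordered)                                     # 16 journals here
top = ordered[: round(0.25 * n)]                     # top quartile
middle = ordered[round(0.25 * n): round(0.75 * n)]   # middle 50%
bottom = ordered[round(0.75 * n):]                   # bottom quartile

print("Top 25%:   ", top)
print("Middle 50%:", middle)
print("Bottom 25%:", bottom)

With 16 journals this puts four in the top quartile and four in the bottom;
the rough grouping above splits them 3/6/7 only because the cutoffs in the
list are approximate.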

This quartile approach is sensible, and it certainly is more meaningful than
messing around with worrying a

[ECOLOG-L] bibliometrics

2015-09-01 Thread David Inouye
The Millennium Alliance for Humanity and Biosphere (MAHB) is a joint 
effort to create a platform to help global civil society address the 
interconnections among the greatest threats to human well-being: 
failure of ecosystem services, economic inequity, social injustice, 
hunger, epidemics, toxic chemicals, and loss of security to crime, 
terrorism and war, especially resource wars (veiled or not), to name a few.

http://mahb.stanford.edu/welcome/

Their weekly blog http://mahb.stanford.edu/category/blog/ includes 
topics that are probably relevant for many ECOLOG-L subscribers.  The 
most recent one is a discussion about potential negative effects of 
the growing emphasis on bibliometrics:


Our obsession with metrics is corrupting science. 
http://mahb.stanford.edu/blog/our-obsession-with-metrics/