On Saturday, 14 May 2011, at 09:26 -0400, Stevan Harnad wrote:

> On 2011-05-11, at 8:35 PM, <jean.claude.gue...@umontreal.ca> wrote:

> I said nothing about peer review, and I would also agree that peer
> review is indispensable. The "new form of judgement" that I allude to
> would be a form of peer review, but probably closer to jury review than
> to individual, isolated reviews.

Peer review is indispensable for two reasons:

(1) Peer review causes articles to be corrected and revised,
interactively, as a *precondition* of publication.


Indeed, and if a group of repositories carrying the collective good names of
their institutions were to offer peer review as a precondition for being
published in those repositories, the process would be exactly the same, except
that this group of repositories would behave like a collection of journals.


In other words, peer review is neither just an accept/reject tag nor
just a post-hoc grade or mark such as A, B, C, D. It is the result of
an adjudication by experts to whose recommendations the author is
answerable as a condition of being published. (What does resemble
A/B/C/D is the journal hierarchy, where journal names and
track-records attest to their quality standards. In other words, there
are A/B/C/D journals, according to their quality standards. Users know
this and weight articles accordingly.)


Let us not mix everything up. My ABCD scheme was suggested as another form of
evaluation, one focusing on quality levels rather than on the weird ranking
schemes stemming from the misuse of impact factors.


(2) Because it is an interactive precondition for publication, peer
review provides a reliable quality filter for users (or at least a
filter as reliable as the quality of the peer-reviewed literature
today, such as it is).


Yes.


When meeting a journal's known peer review standards is a
*precondition* for publication, users are not confronted with the need
to make do with raw, unfiltered papers. Only editors and referees have
to read unfiltered submissions.


No one wants this unfiltered reading of just anything, least of all me.


But the most important point to note is that peer review is active and
answerable. Qualified but overworked peers do their duty to referee
-- reluctantly, and selectively, depending both on the reputation and
quality standards of the editor and journal inviting them to do so and
on the relevance and interest of the submitted paper. They do so,
confident that the author is answerable to the editor for acting upon
those of their recommendations the editor judges to be appropriate.


All that is said here about the way journals behave could equally be true of a
group of repositories acting as a publishing site.


It is extremely unlikely that unfiltered "publications" will find
their qualified referees, bidden or unbidden, ready to devote their
scarce time to reading and tagging them with a "grade," even though
the articles are already "published," hence not answerable to the
referee for corrections or revisions. And in any case, that's all too
late and uncertain for the would-be user.


It depends. There are many reasons behind the desire to evaluate.
The "grade" is indeed applied to journals that will not change, but,
incorporated into the metadata, it provides an extra layer of filtering, as
sketched below.
If the evaluation is done for publication in a consortium of prestigious
repositories, then it works just like journal peer review.
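
As a sketch only (in Python, with an invented record layout and a hypothetical
"grade" field that no current repository schema defines), such metadata-level
filtering might look like this:

    # Hypothetical sketch: each harvested record carries a "grade" field
    # alongside its usual descriptive metadata. The field name, the
    # records, and the grading scale are all invented for illustration.
    records = [
        {"title": "Paper on X", "source": "Repository A", "grade": "A"},
        {"title": "Paper on Y", "source": "Repository B", "grade": "C"},
        {"title": "Paper on Z", "source": "Repository C", "grade": "B"},
    ]

    def filter_by_grade(records, minimum="B"):
        """Keep records whose grade meets the threshold (A is best)."""
        order = {"A": 0, "B": 1, "C": 2, "D": 3}
        return [r for r in records if order[r["grade"]] <= order[minimum]]

    for r in filter_by_grade(records):
        print(r["title"], "-", r["grade"])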

Nor is "too late" inevitable. The life-cycles of articles vary a great deal
from discipline to discipline. The grades (i.e., assigned through secondary
evaluation) might be too slow for some very fast-reacting disciplines, but they
could easily work for the whole of the SSH, and for many disciplines such as
astronomy, mathematics, geology, meteorology, etc.


So, yes, it is indeed peer review and quality standards that are at
issue when one speculates about replacing the current peer review
system -- an interactive, answerable precondition for publications --
with an alternative post-hoc vetting and tagging system that has not
even been tested for whether it could deliver a research literature of
at least the quality and usability of the existing one.


I am not talking about replacing the peer review process. I am talking about
either complementing it with another system, or re-aiming the peer review
process at publishing processes that rely on repositories rather than
journals.


[snip]


> Ranking amounts to having as many levels as there are entities being
> ranked. Levels, on the other hand, lump numbers of entities into the
> same category. Ranking favours only individualized competition; by
> contrast, levels stress thresholds of quality and do not try to identify
> the very best. Good systems, such as schools, for example, use both
> systems, and do not try to make just one approach carry the whole
> evaluation task. The granularity of grades leads to many students being
> lumped together.
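
The distinction can be made concrete with a small sketch (Python; the scores,
names, and grade thresholds below are invented for illustration): ranking
assigns every entity its own position, while levels lump entities into shared
bands.

    # Invented scores for four entities.
    scores = {"w": 91, "x": 88, "y": 73, "z": 52}

    # Ranking: as many positions as there are entities, all distinct.
    ranked = sorted(scores, key=scores.get, reverse=True)
    ranking = {name: pos + 1 for pos, name in enumerate(ranked)}

    # Levels: thresholds of quality that lump entities together.
    def level(score):
        if score >= 85: return "A"
        if score >= 70: return "B"
        if score >= 55: return "C"
        return "D"

    levels = {name: level(s) for name, s in scores.items()}

    print(ranking)  # {'w': 1, 'x': 2, 'y': 3, 'z': 4}: all distinct
    print(levels)   # {'w': 'A', 'x': 'A', 'y': 'B', 'z': 'D'}: w and x lumped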

You are absolutely right. With peer review, all papers published by a
given journal share that journal's grade. Postpublication ranking of
the already peer-reviewed and graded articles would be an excellent
*supplement* to this system, but it is incoherent to imagine it as a
*substitute*.


Except that the journal ranking is applied to a set of articles with varying
quality levels. A journal such as Nature has only a small fraction of articles
whose impact lies above its impact factor, but that fraction lies at very high
levels, thus dragging the average up. My suggestion is that quality assessment
should be directed at articles, not at journals.
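
A toy calculation (all citation counts invented) shows how a skewed
distribution lets a handful of heavily cited articles pull the journal-level
average above the impact of most individual articles:

    # Invented per-article citation counts for one journal volume.
    citations = [0, 1, 1, 2, 2, 3, 4, 5, 48, 134]

    mean = sum(citations) / len(citations)           # 200 / 10 = 20.0
    above = sum(1 for c in citations if c > mean)    # only 2 articles

    print(f"journal-level average (impact-factor-like): {mean:.1f}")
    print(f"articles above that average: {above} of {len(citations)}")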


[snip]

Guédon, Jean-Claude (2004) The “Green” and “Gold” Roads to Open
Access: The Case for Mixing and Matching. Serials Review 30(4):
315-328. doi:10.1016/j.serrev.2004.09.005

Harnad, S. (2005) Fast-Forward on the Green Road to Open Access: The
Case Against Mixing Up Green and Gold. Ariadne 42.
http://eprints.ecs.soton.ac.uk/10675/

> The granularity of the impact factor is designed to rank and only rank.
> Why this is so is not entirely clear to me, but some people do seem to
> see advantages in creating a generalized atmosphere of intense
> competition. The justification may well be to extract the best out of
> everyone, but it should be considered that it also leads to cheating and
> sloppy work.

I think everyone agrees that neither journal impact factors (average
citation counts) nor individual article or author citation counts (or
download counts) are sufficient as metrics of quality, importance or
influence. OA will make it possible to provide and collect many more
metrics, including post-publication tagging and ranking. (User tagging
of unrefereed preprints is also welcome.)
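
As an illustration only (all values invented), OA makes it possible to keep
several article-level signals side by side instead of collapsing them into a
single journal-level average:

    # Invented article-level metrics: citations, downloads, user tags.
    articles = {
        "article-1": {"citations": 12, "downloads": 340, "tags": ["methods"]},
        "article-2": {"citations": 2,  "downloads": 95,  "tags": []},
        "article-3": {"citations": 31, "downloads": 800, "tags": ["replication"]},
    }

    # A journal-level metric collapses everything into one mean...
    mean_citations = sum(a["citations"] for a in articles.values()) / len(articles)
    print(f"journal-level average: {mean_citations:.1f}")  # 15.0

    # ...whereas article-level reporting keeps each item distinct.
    for name, a in articles.items():
        print(name, a["citations"], "citations,", a["downloads"], "downloads")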

But again, these are supplements, not substitutes for peer review. Nor
is there any need for resorting to untested substitutes for peer
review (or waiting for them to be tested) in order to have 100% OA,
today:

All that's needed for 100% OA today is for institutions and funders to
mandate the self-archiving of the refereed, revised, accepted final
drafts of all peer-reviewed journal articles, immediately upon
acceptance for publication. http://bit.ly/eos-policy

This is the simple essence of OA that is still being systematically missed by
most researchers today.

> if I did not mention peer review, it was not because I dismissed
> it, but because, on the contrary, I took it for granted. My take on peer
> review is that it deals in some ways with quality, but not exclusively.
> Other dimensions such as relevance and timeliness of topics are also
> involved in the process. For this reason, I tend to describe passing the
> peer review process as being akin to being admitted across a border. In
> this case, the border is that of the scientific territory. To that
> extent, peer review is indispensable. Beyond that, I would not want to
> fall into a fetishistic mode.

But the topic of the thread on which you intervened (Richard Poynder's
article) was the conjecture that for Open Access, peer review could be
replaced by postpublication vetting of some sort.

> I also feel that peer review is not necessarily tied to journals. Peer
> review can be exercised by various institutionalized bodies that simply
> want to carry out some form of evaluation based on competence and
> knowledge, not on authority.

This returns squarely to what draft it is that authors need to make
OA: The refereed, revised, accepted final draft, which anyone and
everyone can then evaluate as they wish, by way of supplementing peer
review -- or just the unrefereed draft, with the various evaluations
being based on that.

Whatever "body" does the peer review, and certifies the outcome, is,
in all relevant respects, a "journal." What it's called is not
important; what's important is that it really provides answerable,
interactive peer review, as a precondition for certification at the
body's known quality level -- known to the usership from the "body's"
established track record for quality.

> Finally, journals are not necessarily the only sites suited to
> evaluating the quality of scientific work. Up to now, they have been
> doing most of this work, but the advent of repositories exposing
> scientific work to the world will call for the practice of peer review
> in these new kinds of sites. "Exposing to the world" is the first
> meaning of publishing.

An institutional (or central) repository is merely an
(open-)access-provider. It is not a peer-reviewer or publisher.
Moreover, institutions overseeing the evaluation of their own output
are likely to be seen as (and to become) vanity-presses, for obvious
reasons of conflict of interest. Peer review needs to be done by a
neutral 3rd party, not the author or the author's institution.

But whatever the "body" is, if it is doing answerable peer review, and
certifying the outcome with its name and reputation, we already have a
perfectly adequate name for that sort of entity and service: it is
called a journal...

Stevan Harnad



