Re: Another Poynder Eye-Opener on Open Access

2011-05-16 Thread Jean-Claude Guédon
Le samedi 14 mai 2011 à 20:17 -0400, Stevan Harnad a écrit :

On Sat, May 14, 2011 at 11:39 AM, Jean-Claude Guédon
jean.claude.gue...@umontreal.ca wrote:

 I am not talking about replacing the peer review process. I am talking about
 either complementing it with another system, or re-aiming the peer review
 process on publishing processes that rely on the repositories rather than
 the journals.

Complementing peer review is fine, but the complement that's really
urgent (and already long overdue) is OA.


Indeed, and to get OA, you need some incentives. Creating complementary and
alternative forms of value around repositories (including OA journals) will
help. Just as getting mandates is helping.


Getting a consortium of repositories to take over peer review means
getting them to take over journal publishing. Good luck. 


Again, let us not confuse everything. If repositories complement the evaluation
of already published articles, but with different emphases, and different
objectives (quality rather than competition-based excellence, for example), this
is not publishing, at least not in the traditional sense of the word.

If repositories begin to accept articles whose peer review they organize
themselves, then, indeed, it is publishing. Obviously, a credible peer review
has to rest on more than one institution. This is the reason behind recommending
the formation of repository networks, preferably across national boundaries.

I would see the second hypothesis gradually evolving out of the first one.

(But why? So
far we haven't been very successful yet at getting most authors to
provide OA to their published journal articles either by depositing
them in their repositories or by submitting them to OA journals...)


The lack of success may well be due to the fact that the present modes of
evaluation, especially when used in the context of tenure and promotion
processes, do not generally seem to lead to the conclusion that OA is really
useful to one's career.

Like Stevan, I truly believe there is an OA advantage measured by impact, but,
alas, this conclusion has not penetrated the collective consciousness of
scientists.

My conclusion: let us all work together on all possible and credible hypotheses
that can help OA, including, of course, the quest for mandates.

Jean-Claude Guédon


Stevan Harnad






Re: Another Poynder Eye-Opener on Open Access

2011-05-14 Thread Stevan Harnad
[Forwarded from Jean-Claude Guédon: Direct posting had arrived encrypted.]

Le mercredi 11 mai 2011 à 23:40 -0400, Stevan Harnad a écrit :
On 2011-05-11, at 8:35 PM, jean.claude.gue...@umontreal.ca wrote:

  SH:
  to deposit everything as unrefereed preprints in an IR
  [instead of submitting to a journal for peer review] and
  then wait for the better stuff to be picked up by an overlay
  journal. (I actually think that's utter nonsense.)
 
  JCG:
  If overlay journals (or any equivalent scheme) were to be as
  passive as Stevan describes, I would fully agree with him.
  However, it is not ridiculous to imagine consortia of
  repositories forming to promote their content and, on top of
  that, establishing a new layer of active judgement that would create
  new forms of value for these articles. The tyranny of citation
  impacts and their misuse must be, to say the
  least, diluted to bring back some sanity to the evaluation
  procedures presently in force in various scientific communities.

 SH:
 More metrics are always welcome, but in addition to -- not in place
 of -- peer review.

I said nothing about peer review, and I would also agree that peer
review is indispensable. The new form of judgement that I allude to
would be a form of peer review, but probably closer to jury review than
to individual, isolated reviews.

 SH:
 (1) How do levels differ from ranks?

Ranking amounts to having as many levels as there are entities being
ranked. Levels, on the other hand, lump numbers of entities into the
same category. Ranking favours only individualized competition; by
contrast, levels stress thresholds of quality and do not try to identify
the very best. Good systems, such as schools, use both, and do not
try to make just one approach carry the whole
evaluation task. The granularity of grades leads to many students being
lumped together.
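
To illustrate the difference in granularity, here is a minimal sketch in
Python, with made-up scores standing in for no actual metric:

# Hypothetical illustration: the scores are invented, not any real measure.
scores = {"paper1": 9.132, "paper2": 9.128, "paper3": 7.4, "paper4": 4.9}

# Ranking: as many positions as there are entities; a 0.004 difference
# still separates "first" from "second".
ranking = sorted(scores, key=scores.get, reverse=True)

# Levels: entities are lumped into a few threshold-based categories,
# so near-equals share the same grade.
def level(score):
    if score >= 8: return "A"
    if score >= 6: return "B"
    if score >= 4: return "C"
    return "D"

levels = {name: level(s) for name, s in scores.items()}
# ranking -> ['paper1', 'paper2', 'paper3', 'paper4']
# levels  -> {'paper1': 'A', 'paper2': 'A', 'paper3': 'B', 'paper4': 'C'}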

The granularity of the impact factor is designed to rank and only rank.
Why this is so is not entirely clear to me, but some people do seem to
see advantages in creating a generalized atmosphere of intense
competition. The justification may well be to extract the best out of
everyone, but it should be considered that it also leads to cheating and
sloppy work.

The absence of competition leads to stagnation. Balancing between quality and
excellence (the latter being a product of competition) is important to
optimize the human quest for knowledge. Introducing levels is my way
to remind all of us that we need to seek that balance. At the very
least, we must not confuse quality with excellence.

 SH:
 (2) And post hoc means post hoc: prepublication means that papers need
to meet peer review standards in order to be accepted for publication
(and hence certified as having met the quality standards of the journal
that accepted them). This often means modification of the submitted draft,
not just a grade attached to it.


Again, if I did not mention peer review, it was not because I dismissed
it, but because, on the contrary, I took it for granted. My take on peer
review is that it deals in some ways with quality, but not exclusively.
Other dimensions such as relevance and timeliness of topics are also
involved in the process. For this reason, I tend to describe passing the
peer review process as being akin to being admitted across a border. In
this case, the border is that of the scientific territory. To that
extent, peer review is indispensable. Beyond that, I would not want to
fall into a fetishistic mode.

I also feel that peer review is not necessarily tied to journals. Peer
review can be exercised by various institutionalized bodies that simply
want to carry out some form of evaluation based on competence and
knowledge, not on authority.

Finally, journals are not necessarily the only sites suited to
evaluating the quality of scientific work. Up to now, they have been
doing most of this work, but the advent of repositories exposing
scientific work to the world will call for the practice of peer review
at these new kinds of sites. Exposing to the world is the first
meaning of 'publishing'.

Jean-Claude Guédon


Re: Another Poynder Eye-Opener on Open Access

2011-05-14 Thread Stevan Harnad
 On 2011-05-11, at 8:35 PM, jean.claude.gue...@umontreal.ca wrote:

 I said nothing about peer review, and I would also agree that peer
 review is indispensable. The new form of judgement that I allude to
 would be a form of peer review, but probably closer to jury review than
 to individual, isolated reviews.

Peer review is indispensable for two reasons:

(1) Peer review causes articles to be corrected and revised,
interactively, as a *precondition* of publication.

In other words, peer review is neither just an accept/reject tag nor
just a post-hoc grade or mark such as A, B, C, D. It is the result of
an adjudication by experts to whose recommendations the author is
answerable as a condition of being published. (What does resemble
A/B/C/D is the journal hierarchy, where journal names and
track-records attest to their quality standards. In other words, there
are A/B/C/D journals, according to their quality standards. Users know
this and weight articles accordingly.)

(2) Because it is an interactive precondition for publication, peer
review provides a reliable quality filter for users (or at least a
filter as reliable as the quality of the peer-reviewed literature
today, such as it is).

When meeting a journal's known peer review standards is a
*precondition* for publication, users are not confronted with the need
to make do with raw, unfiltered papers. Only editors and referees have
to read unfiltered submissions.

But the most important point to note is that peer review is active and
answerable. Qualified but overworked peers do their duty to referee
-- reluctantly, and selectively, depending both on the reputation and
quality standards of the editor and journal inviting them to do so and
on the relevance and interest of the submitted paper. They do so,
confident that the author is answerable to the editor for acting upon
those of their recommendations the editor judges to be appropriate.

It is extremely unlikely that unfiltered publications will find
their qualified referees, bidden or unbidden, ready to devote their
scarce time to reading and tagging them with a grade, even though
the articles are already published, hence not answerable to the
referee for corrections or revisions. And in any case, that's all too
late and uncertain for the would-be user.

So, yes, it is indeed peer review and quality standards that are at
issue when one speculates about replacing the current peer review
system -- an interactive, answerable precondition for publications --
with an alternative post-hoc vetting and tagging system that has not
even been tested for whether it could deliver a research literature of
at least the quality and usability of the existing one.

And adopting such untested alternatives is certainly not the price
that needs to be paid for open access to the existing peer reviewed
research literature; for all that needs to be done there is to make
the peer-reviewed drafts OA immediately upon acceptance for
publication. No need to make only the raw unrefereed drafts OA and
then wait for pot luck!

 Ranking amounts to having as many levels as there are entities being
 ranked. Levels, on the other hand, lump numbers of entities into the
 same category. Ranking favours only individualized competition; by
 contrast, levels stress thresholds of quality and do not try to identify
 the very best. Good systems, such as schools, use both, and do not
 try to make just one approach carry the whole
 evaluation task. The granularity of grades leads to many students being
 lumped together.

You are absolutely right. With peer review, all papers published by a
given journal share that journal's grade. Postpublication ranking of
the already peer-reviewed and graded articles would be an excellent
*supplement* to this system, but it is incoherent to imagine it as a
*substitute*.

What needs to be made OA is the refereed, accepted draft, not just the
unrefereed preprint. There is a world of difference between these two.

Guédon, Jean-Claude (2004) The “Green” and “Gold” Roads to Open
Access: The Case for Mixing and Matching. Serials Review 30(4):
315-328. doi:10.1016/j.serrev.2004.09.005

Harnad, S. (2005) Fast-Forward on the Green Road to Open Access: The
Case Against Mixing Up Green and Gold. Ariadne 42.
http://eprints.ecs.soton.ac.uk/10675/

 The granularity of the impact factor is designed to rank and only rank.
 Why this is so is not entirely clear to me, but some people do seem to
 see advantages in creating a generalized atmosphere of intense
 competition. The justification may well be to extract the best out of
 everyone, but it should be considered that it also leads to cheating and
 sloppy work.

I think everyone agrees that neither journal impact factors (average
citation counts) nor individual article or author citation counts (or
download counts) are sufficient as metrics of quality, importance or
influence. OA will make it possible to provide and collect many more
metrics, 

Re: Another Poynder Eye-Opener on Open Access

2011-05-14 Thread Jean-Claude Guédon
Le samedi 14 mai 2011 à 09:26 -0400, Stevan Harnad a écrit :

 On 2011-05-11, at 8:35 PM, jean.claude.gue...@umontreal.ca wrote:

 I said nothing about peer review, and I would also agree that peer
 review is indispensable. The new form of judgement that I allude to
 would be a form of peer review, but probably closer to jury review than
 to individual, isolated reviews.

Peer review is indispensable for two reasons:

(1) Peer review causes articles to be corrected and revised,
interactively, as a *precondition* of publication.


Indeed, and if a group of repositories carrying the collective good names of
their institutions were to offer peer review as a precondition for being
published in the repositories, the process would be exactly the same, except
that this group of repositories would behave like a collection of journals.


In other words, peer review is neither just an accept/reject tag nor
just a post-hoc grade or mark such as A, B, C, D. It is the result of
an adjudication by experts to whose recommendations the author is
answerable as a condition of being published. (What does resemble
A/B/C/D is the journal hierarchy, where journal names and
track-records attest to their quality standards. In other words, there
are A/B/C/D journals, according to their quality standards. Users know
this and weight articles accordingly.)


Let us not mix up everything. My A/B/C/D scheme was suggested as another form
of evaluation, one focusing on quality levels rather than on the weird ranking
schemes stemming from the misuse of impact factors.


(2) Because it is an interactive precondition for publication, peer
review provides a reliable quality filter for users (or at least a
filter as reliable as the quality of the peer-reviewed literature
today, such as it is).


Yes.


When meeting a journal's known peer review standards is a
*precondition* for publication, users are not confronted with the need
to make do with raw, unfiltered papers. Only editors and referees have
to read unfiltered submissions.


No one wants this unfiltered reading of whatever comes along, least of all me.


But the most important point to note is that peer review is active and
answerable. Qualified but overworked peers do their duty to referee
-- reluctantly, and selectively, depending both on the reputation and
quality standards of the editor and journal inviting them to do so and
on the relevance and interest of the submitted paper. They do so,
confident that the author is answerable to the editor for acting upon
those of their recommendations the editor judges to be appropriate.


Everything said here about the way journals behave could equally be true of a
group of repositories acting as a publishing site.


It is extremely unlikely that unfiltered publications will find
their qualified referees, bidden or unbidden, ready to devote their
scarce time to reading and tagging them with a grade, even though
the articles are already published, hence not answerable to the
referee for corrections or revisions. And in any case, that's all too
late and uncertain for the would-be user.


It depends. There are many reasons behind the desire to evaluate.
The grade would indeed bear on articles already published in journals, and it
would not change them; but, incorporated into the metadata, it provides an
extra layer of filtering.
If the evaluation is done for publication in a consortium of prestigious
repositories, then it works just like journal peer review.
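
To make the metadata point concrete, here is a minimal sketch in Python of how
such a grade might be attached to an article's record and then used as that
extra layer of filtering. The field names (grade, graded_by) are hypothetical
and belong to no actual repository metadata standard:

# Hypothetical sketch only; illustrates the idea, not a real schema.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ArticleRecord:
    title: str
    journal: Optional[str] = None    # original publication venue, unchanged
    grade: Optional[str] = None      # secondary evaluation: "A".."D"
    graded_by: Optional[str] = None  # e.g. a repository consortium's jury

def add_grade(record: ArticleRecord, grade: str, jury: str) -> ArticleRecord:
    """Attach a post-publication grade without altering the original record."""
    if grade not in ("A", "B", "C", "D"):
        raise ValueError("grade must be one of A, B, C, D")
    record.grade = grade
    record.graded_by = jury
    return record

def filter_by_grade(records: List[ArticleRecord], minimum: str = "B"):
    """The extra filtering layer: keep records at or above a quality level."""
    order = {"A": 0, "B": 1, "C": 2, "D": 3}
    return [r for r in records
            if r.grade is not None and order[r.grade] <= order[minimum]]

Under these assumptions the original journal record stays untouched; the grade
is simply an added layer that users and search services can filter on.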

The 'too late' objection does not necessarily hold either. The life-cycles of
articles vary a great deal from discipline to discipline. The grades (i.e.,
through secondary evaluation) might be too slow for some very fast-reacting
disciplines, but they could easily work for the whole of SSH, and for many
disciplines such as astronomy, mathematics, geology, meteorology, etc.


So, yes, it is indeed peer review and quality standards that are at
issue when one speculates about replacing the current peer review
system -- an interactive, answerable precondition for publications --
with an alternative post-hoc vetting and tagging system that has not
even been tested for whether it could deliver a research literature of
at least the quality and usability of the existing one.


I am not talking about replacing the peer review process. I am talking about
either complementing it with another system, or re-aiming the peer review
process on publishing processes that rely on the repositories rather than the
journals.


[snip]


 Ranking amounts to having as many levels as there are entities being
 ranked. Levels, on the other hand, lump numbers of entities into the
 same category. Ranking favours only individualized competition; by
 contrast, levels stress thresholds of quality and do not try to identify
 the very best. Good systems, such as schools, use both, and do not
 try to make just one approach carry the whole
 evaluation task. The granularity of grades leads to many students being
 lumped together.

You are absolutely right. With peer review, all papers published by a
given journal share that journal's grade.

Re: Another Poynder Eye-Opener on Open Access

2011-05-14 Thread Stevan Harnad
On Sat, May 14, 2011 at 11:39 AM, Jean-Claude Guédon
jean.claude.gue...@umontreal.ca wrote:

 I am not talking about replacing the peer review process. I am talking about
 either complementing it with another system, or re-aiming the peer review
 process on publishing processes that rely on the repositories rather than
 the journals.

Complementing peer review is fine, but the complement that's really
urgent (and already long overdue) is OA.

Getting a consortium of repositories to take over peer review means
getting them to take over journal publishing. Good luck. (But why? So
far we haven't been very successful yet at getting most authors to
provide OA to their published journal articles either by depositing
them in their repositories or by submitting them to OA journals...)

Stevan Harnad



Re: Another Poynder Eye-Opener on Open Access

2011-05-12 Thread Stevan Harnad
On 2011-05-11, at 8:35 PM, jean.claude.gue...@umontreal.ca wrote:

 SH:
 to deposit everything as unrefereed preprints in an IR
 [instead of submitting to a journal for peer review] and
 then wait for the better stuff to be picked up by an overlay
 journal. (I actually think that's utter nonsense.)
 
 JCG:
 If overlay journals (or any equivalent scheme) were to be as 
 passive as Stevan describes, I would fully agree with him.
 However, it is not ridiculous to imagine consortia of
 repositories forming to promote their content and, on top of
 that, establishing a new layer of active judgement that would create
 new forms of value for these articles. The tyranny of citation
 impacts and their misuse must be, to say the
 least, diluted to bring back some sanity to the evaluation
 procedures presently in force in various scientific communities.

More metrics are always welcome, but in addition to -- not in place
of -- peer review.

 SH: The frequently mooted notion... of postpublication peer review 
 ... is like a kind of evolutionarily unstable strategy that
 could be dipped into experimentally to test what scholarly
 quality, sustainability, and scaleability it would yield -- until
 (as I would predict) the consequences become evident enough to
 induce everyone to draw back.
 
 JCG:
 What I just wrote above may partially correspond to this 
 postpublication peer review; but then it may not. In any case, 
 I would see this effort as one aiming at building well-defined
 quality levels, rather than ranking systems.

(1) How do levels differ from ranks?

(2) And post hoc means post hoc: prepublication means that papers need to meet 
peer review standards in order to be accepted for publication (and hence 
certified as having met the quality standards of the journal that accepted them).
This often means modification of the submitted draft, not just a grade 
attached to it.

 JCG:
 In short, besides
 ranking everybody in ways that are sometimes difficult to justify
 (three decimals in impact-factor measurements, for example...),
 it might be interesting to provide A, B, C, D grades to articles 
 after publication.

(a) If it hasn't met quality standards before publication, it's not publication 
but vanity-press self-publication.

(b) Competent referees hardly have the time to review what reputable editors of 
reputable journals ask them to review:  Why would they do it voluntarily, or 
randomly, for unfiltered self-publications that are not even answerable to 
their recommendations? Cloud-tagging?

 JCG:
 Who would do that? Juries established by the 
 consortia of repositories I mentioned earlier. Why would they do 
 that? To promote their content and make it more useful 
 (especially if the metadata included an extension incorporating 
 these grades).

The juries are the referees, today. And they referee unpublished material 
that is answerable to their judgments, just as their judgments are answerable
to the editor's judgments. The resultant published quality is then the 
journal's quality standard, for which it is in turn answerable.

Consortia promoting their own content? Sounds like vanity-press-squared!

 SH:
 Richard replied that the reason he did not dwell on Green OA,
 which he too favors, is that he thinks Green OA progress is still
 too slow (I agree!) and that it's important to point out that the
 fault in the system is at the publisher end -- whether non-OA
 publisher or OA. I continue to think the fault is at the
 researcher end, and will be remedied by Green OA self-archiving
 by researchers, and Green OA self-archiving mandates by research
 institutions and funders.
 
 JCG:
 If publishers did not constantly muddy the waters and create all 
 kinds of variations on what one can self-archive (for example, no publisher PDF)

PDF is irrelevant. The only relevant policy statement desired from the 
publisher is an endorsement of OA self-archiving of the refereed draft 
immediately upon acceptance for publication (and even that isn't necessary, 
just helpful, as the refereed draft can and should be self-archived whether or 
not it is made immediately OA):

Sale, A., Couture, M., Rodrigues, E., Carr, L. and Harnad, S. (2010) Open 
Access Mandates and the Fair Dealing Button. In: Dynamic Fair Dealing: 
Creating Canadian Culture Online (Rosemary J. Coombe & Darren Wershler, Eds.)
http://eprints.ecs.soton.ac.uk/18511/

 researchers would not feel that there is too much 
 complexity and uncertainty in self-archiving.
 So, yes, researchers are ultimately responsible, but they cannot be held 
 completely responsible if the rules are made very complex and 
 subtle.

The remedy for researcher passivity is deposit mandates from their institutions 
and funders (preferably making the deposit the mechanism for submitting 
publications for annual performance review).

  A third stakeholder deserves to be mentioned in this
 context: research administrators. If their evaluation procedures

Another Poynder Eye-Opener on Open Access

2011-03-14 Thread Stevan Harnad
Poynder, Richard (2011) PLoS ONE, Open Access, and the Future of
Scholarly Publishing. Open and Shut. 7 March 2011.
http://poynder.blogspot.com/2011/03/plos-one-open-access-and-future-of.html
ABSTRACT: Open Access (OA) advocates argue that PLoS ONE is now the
largest scholarly journal in the world. Its parent organisation —
Public Library of Science (PLoS) — was co-founded in 2001 by Nobel
Laureate Harold Varmus. What does the history of PLoS tell us about
the development of PLoS ONE? What does the success of PLoS ONE tell us
about OA? And what does the current rush by other publishers to clone
PLoS ONE tell us about the future of scholarly communication?

Comment:

Richard Poynder has written another timely and important eye-opener
about Open Access. Although (as usual!) I disagree with some of the
points Richard makes in his paper, I think it is again a welcome
cautionary piece from this astute observer and chronicler of OA
developments across the years.

(1) Richard is probably right that PLOS ONE is over-charging and
under-reviewing (and over-hyping).

(2) It is not at all clear, however, that the solution is to deposit
everything instead as unrefereed preprints in an IR and then wait for
the better stuff to be picked up by an overlay journal. (I actually
think that's utter nonsense.)

(3) The frequently mooted notion (of Richard Smith and many others) of
postpublication peer review is not much better, but it is like a
kind of evolutionarily unstable strategy that could be dipped into
experimentally to test what scholarly quality, sustainability, and
scaleability it would yield -- until (as I would predict) the
consequences become evident enough to induce everyone to draw back.

(4) Although there is no doubt that Harold Varmus's stature and
advocacy have had an enormous positive influence on the growth of OA,
in my opinion Richard is attributing far too much prescience to
Harold's original 1999 E-biomed proposal. [See my 1999 criticisms.
Although I was still foolishly flirting with central deposit at the
time (and had not yet realized that mandates would be required to get
authors to deposit at all), I think I picked out the points that
eventually led to incoherence; and, no, PLOS was not on the horizon at
that time (even BMC didn't exist).]

(5) Also, of course, I think Richard gives the Scholarly Scullery way
too much weight (though Richard does rightly state that he has no
illusions about those chefs' motivation -- just as he stresses that he
has no doubts about PLOS's sincerity).

(6) Richard's article may do a little short-term harm to OA, but not a
lot. It is more likely to do some good.

(7) I wish, of course, that Richard had mentioned the alternative that
I think is the optimal one (and that I think will still prevail),
namely, that self-archiving the refereed final draft of all journal
articles (green OA) will be mandated by all universities and funders,
eventually causing subscription cancellations, driving down costs to
just those of peer review, and forcing journals to convert to
institutional payment for individual outgoing paper publication
instead of for incoming bulk subscription. The protection against the
temptation to dumb down peer review to make more money is also
simple and obvious: no-fault refereeing charges.

(8) Richard replied that the reason he did not dwell on Green OA,
which he too favors, is that he thinks Green OA progress is still too
slow (I agree!) and that it's important to point out that the fault in
the system is at the publisher end -- whether non-OA publisher or OA.
I continue to think the fault is at the researcher end, and will be
remedied by Green OA self-archiving by researchers, and Green OA
self-archiving mandates by research institutions and funders.

Harnad, S. (2010) No-Fault Peer Review Charges: The Price of
Selectivity Need Not Be Access Denied or Delayed. D-Lib Magazine 16
(7/8).

Harnad, S. (2009) The PostGutenberg Open Access Journal. In: Cope, B.
& Phillips, A. (Eds.) The Future of the Academic Journal. Chandos.


Stevan Harnad
American Scientist Open Access Forum
EnablingOpenScholarship