Which Journals Reach Researchers, Universities and Funders?

2004-12-15 Thread Jim Till

On Tue, 14 Dec 2004, Stevan Harnad wrote [as part of long message]:


Jim Till asks this question, but about Green:


Jim Till wrote: Two questions: 1) Which are the top three
journals in which to publish articles about OA? 2) Of
these, which ones are of a hue of green such that they
permit self-archiving of the final peer-reviewed,
accepted and edited version of the article?
Jim Till, University of Toronto


He needs to check whether there is a suitable Green
journal among the 92% in the Romeo Directory of journal
self-archiving policies: http://romeo.eprints.org/


Please note that my first question was: 1) Which are the top
three journals in which to publish articles *about* OA?

For example, the Canadian Breast Cancer Research Alliance
(CBCRA, a major funder in Canada of research on breast
cancer) has just begun to consider the feasibility and
desirability of setting up a CBCRA OA archive, with an
initial focus on the self-archiving of peer-reviewed
research reports published by its own grantees.

If I chose to prepare an article based on experience with
this particular planning process (which might be of interest
to other funding agencies and foundations), what would be
your advice about the best (green) journals in which to
publish such an article?

Jim Till
University of Toronto

   [Moderator's Note: Good question. Publishing in library or
   publishing journals like Learned Publishing (??) or Serials Review
   (Green) is either preaching to the converted or reaching the wrong
   constituency -- since it is only authors who can self-archive
   and only their universities and research funders who can adopt
   self-archiving policies. So it really is an important question how
   to reach this constituency, across all disciplines of science and
   scholarship. Nature (Green) and Science (Gray) are possibilities
   (their non-refereed sections) if your news is important enough.
   The right venue may not be a peer-reviewed journal at all, but a
   wide-spectrum magazine such as the Chronicle of Higher Education
   or the Times Higher Education Supplement.  -- SH]


Re: The Library of Alexandria Non-Problem

2004-09-11 Thread Jim Till

On Sat, 11 Sep 2004, Eberhard R. Hilf wrote [in part]:


eh In addition, commercial publishers do aim at the
eh present time to earn money and do not care about the
eh future, when they might no longer exist. Some of the
eh e-versions of my papers with Wiley are gone after less
eh than ten years, because the Publisher bought the
eh (indirect daughter) Physikalische Blaetter, but without
eh the e-archive.
eh
eh Of course these published e-Documents of mine are still
eh in our Institutional OA archive. Institutional
eh self-archiving, together with agreements on mirrors,
eh and retrieval, is a safe proposition for long-term
eh archiving.


The paper version (if there is one) is also a safe
proposition for long-term archiving. However, if there's no
paper version, then a stable institutional or central
archive becomes crucial. And, if the only version that
survives is a paper version, then (in my limited
experience) it becomes quite difficult for an amateur to
prepare an exact copy (such as a PDF) of the original,
unless the copy is prepared entirely as a set of images.
My understanding is
that, for text embedded entirely in images, full-text
searching becomes a problem (e.g. for documents in
DSpace-based archives, such as the one recently created for
my home department). See:
https://tspace.library.utoronto.ca/handle/1807/2324

Much better to be able to archive a good-quality electronic
version of the kind that publishers prepare. But this
solution isn't available from some publishers, nor for many
older articles, unless the publisher has already prepared
electronic backfiles. Obviously, if the publisher has
already gone out of business, such backfiles are likely to
be missing.

If there's a reasonable chance that a paper might become a
classic, then a copy of it clearly should be archived, to
meet the needs of scholars in the future (such as
historians), as well as those in the present. Fortunately,
the short-term goals and the longer-term goals require no
difference in behavior. One simply self-archives a
good-quality version of the article, in a stable archive.

Jim Till
University of Toronto


Re: Mandating OA around the corner?

2004-08-16 Thread Jim Till

On Fri, 13 Aug 2004, Stevan Harnad wrote [in part]:


Excerpt from Peter Suber's Open Access News
http://www.earlham.edu/~peters/fos/2004_08_08_fosblogarchive.html#a109240384557714980

The Canadian Association of Research Libraries (CARL)
http://www.carl-abrc.ca/ has written a Brief to the Social
Sciences and Humanities Research Council of Canada, June
29, 2004.
http://www.carl-abrc.ca/projects/sshrc/transformation-brief.pdf

The brief recommends ways in which Canada's Social Science
and Humanities Research Council (SSHRC)
http://www.sshrc.ca/ might transform itself, especially to
promote new and more effective forms of scholarly
communication.


Two of the recommendations in the brief from CARL to SSHRC
are:

* Investigate the feasibility for recipients of SSHRC
research grants to publish in open access journals
and/or deposit research articles in institutional
repositories (p. 5).

* Encourage the use of the new institutional repositories
infrastructures being built at Canadian research libraries
to house Canadian research output (p. 6).

The authors of the brief to SSHRC did _not_ include a
recommendation that Canadian granting agencies should set up
their own electronic repositories/archives.

Why not? After all, as long as all of the repositories in a
distributed network of repositories are interoperable, it
doesn't matter whether or not they have been set up by
institutions, or by granting agencies, or by other more
discipline-oriented entities. And, if granting agencies set
up their own (interoperable) repositories, they wouldn't
need to wait for individual universities to do so.

However, perhaps it should be noted that the source of the
brief is the Canadian Association of Research Libraries, and
that research libraries are mainly based at universities.
So, perhaps the authors had an understandable bias in favour
of electronic repositories based at their own institutions
(the Canadian universities), rather than ones set up and
maintained by (one or more) Canadian granting agencies.

Of course, it doesn't matter to search engines (including
already-popular ones, such as Google) where the repositories
are based, as long as they are openly accessible, and
provide metadata demonstrating that the archive's sponsors
(through their participation in an appropriate interoperable
network) are credible.
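
Interoperability here means OAI-PMH: any harvester can pull Dublin
Core records from any compliant repository with the same request,
whoever hosts it. A minimal sketch (the repository base URL and the
sample response below are illustrative, not a real endpoint):

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Build an OAI-PMH ListRecords request; any compliant repository,
# whether institutional or funder-based, answers the same query.
def list_records_url(base_url):
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": "oai_dc"})

# Namespace for unqualified Dublin Core elements.
DC = "{http://purl.org/dc/elements/1.1/}"

def dc_titles(response_xml):
    """Extract Dublin Core titles from an OAI-PMH response document."""
    root = ET.fromstring(response_xml)
    return [el.text for el in root.iter(DC + "title")]

# A tiny sample response, standing in for a live repository:
sample = """<OAI-PMH xmlns:dc="http://purl.org/dc/elements/1.1/">
  <ListRecords><record><metadata>
    <dc:title>Cancer-related electronic support groups</dc:title>
  </metadata></record></ListRecords></OAI-PMH>"""

print(list_records_url("http://repository.example.org/oai"))
print(dc_titles(sample))
```

The point of the sketch: the harvester never needs to know whether
the base URL belongs to a university library or a granting agency.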

Jim Till
University of Toronto


Re: Mandating OA around the corner?

2004-07-23 Thread Jim Till
On Fri, 23 Jul 2004, David Prosser wrote [in part]:

 I'm not at all convinced by the 'spending money on
 dissemination impedes the discovery of a cure for cancer'
 argument.  Spending money on making sure that data are
 easily available has accelerated the pace of scientific
 discovery (most famously in genome research) and there is
 no reason to think that this will not be the same for
 papers.

I agree that funds spent on effective knowledge transfer
are well-spent. There have been many debates in the cancer
research field (and in other areas of health research) about
how best to foster translational research (e.g. the
transfer of basic knowledge into policy and practice).
Surely attempts to foster the dissemination of primary
research results provide one very credible way to facilitate
knowledge transfer in general, and health-related
translational research in particular?

I'd use a similar argument to counter concerns about the
'free rider' question about OA, whereby big business gains
free access to research for its commercial advantage.
See: 'MPs brand scientific publishing unsatisfactory', by
Bobby Pickering, Information World Review, 20 July 2004,
http://www.iwr.co.uk/IWR/1156758

Jim Till
University of Toronto


Re: Mandating OA around the corner?

2004-07-10 Thread Jim Till
I agree with Simeon Warner that funding agencies should
seriously consider a requirement that all publications
resulting from research supported by such agencies must be
deposited in open-access repositories.

This raises a question that hasn't been discussed recently
by members of this Forum: Is there any funding agency, other
than (I believe) the Danish Research Centre for Organic
Farming (DARCOF), via its Organic Eprints archive (see:
http://orgprints.org/ ), that has done both of these:
a) mandated open access to the results of research funded
by that agency, and b) established its own
knowledge-transfer-oriented eprints archive?

For example, such an archive could be used instead of, or
in addition to, the grantees' own preferred institutional
(or discipline-based) open-access repository.

A reminder: Peter Suber has developed a draft version of an
open-access policy for foundation research grants, and has
discussed some of the issues that need to be considered:

Model Open-Access Policy for Foundation Research Grants
Draft 8.  March 7, 2004.

http://www.earlham.edu/~peters/fos/foundations.htm

An example of one of the issues that Peter has considered
(see: Term 10. When the open-access condition is violated):

If compelling recipients to repay the grant is too strong,
and compelling late open-access dissemination is too weak,
then foundations might consider some intermediate options.
For example, the foundation could reserve some additional
incentive funds to be released only when the recipient has
provided open access to works based on previous funds. Or
the foundation could simply make non-complying recipients
ineligible for future grants.

Jim Till
University of Toronto


Re: How to compare research impact of toll- vs. open-access research

2004-04-26 Thread Jim Till
On Fri Jan 09 2004, I posted a message about self-archiving
a postprint of mine in the CogPrints archive, see:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/3384.html

It's a postprint of an invited, peer-reviewed, edited
commentary, about cancer-related electronic support groups.
I've retained copyright.

On April 21 (2004), a version of the postprint became
openly-accessible in the CogPrints archive, via:
http://cogprints.ecs.soton.ac.uk/archive/3566/

By April 24, it had already been harvested by Google. Today,
(April 25) when I did a Google search using the keywords
"Electronic support groups" (including the quotation marks),
I was surprised to find that the version in the CogPrints
archive was ranked within the top 10 (at #5) in the entire
ranked list of links that was obtained.

A link to an online abstract page of the published version
(in a subscription-based journal, Journal of Cancer
Integrative Medicine 2004; 2: 21-24) is at:
http://www.pnpco.com/pn14007.html (this link was ranked #40
in the list of links obtained via the same Google search).

Interesting?

Jim Till
University of Toronto


Re: How to compare research impact of toll- vs. open-access research

2004-04-26 Thread Jim Till
Comments on David's message are inserted below. (BTW, the subject heading
for my previous message, to which David responded, was chosen by Stevan,
not by me). --Jim

On Mon, 26 Apr 2004, David Goodman wrote:

 For one thing, listing in Cogprints is more of a recommendation to a
 searcher with interdisciplinary interests than publication in this very
 obscure journal, which isn't even in PubMed. For another, people who
 have used it know that Cogprints leads to full text, unlike the journal's
 abstracts-only link.

Yes, David, the JCIM is a still-obscure new journal, in a rather new (and,
still quite controversial?) field. But, I'm one of those odd folks who
believe that it's the perceived merit of the *article* itself that should
(if possible) be evaluated, as directly as possible, rather than via more
indirect proxy-indicators (such as the perceived merit of the *journal*
in which the article is published).

I responded positively to an invitation from JCIM to provide a commentary,
mainly in order to take advantage of the opportunity that it provided
to do a small experiment on self-archiving. The editorial board for
this journal is, I think, a credible one, and my topic (cancer-related
electronic support groups) was a suitable one for this particular journal.

BTW, I'd already published, about a year earlier, a previous invited
commentary on this same topic, in another new journal, Health and
Quality of Life Outcomes (HQLO), one of the set of journals published
by BioMed Central. There was no need to self-archive my previous
commentary, because the published version is openly accessible, via:
http://www.hql.com/content/1/1/16

An initial step in my little experiment with JCIM was to test whether
or not the editors and the publisher would permit me to retain copyright
(and, the right to self-archive a postprint). After a brief exchange of
emails, this negotiation was successful. A later step in this same small
experiment was to self-archive versions of the same postprint at three
different locations, one of which was the CogPrints archive. The URLs
for the other two alternative locations are available via the CogPrints
location, see: http://cogprints.ecs.soton.ac.uk/archive/3566/

Another step in my small experiment is to compare the evolution of the
Google page ranks for the three self-archived versions (they differ
in some very minor ways; another somewhat controversial topic?). What
I *didn't* expect was a high early ranking for *any* of the three
self-archived versions.

Could it have been simply the number of visits to the site that led to
a relatively high Google page rank for the CogPrints location, within
a few days after the postprint was posted there? My test search was
for "electronic support groups" (note that I *didn't* include the word
"cancer" among the keywords that I used in my test search). At present,
I have no credible explanation for the almost-immediate high rank that
the Google page ranking algorithm gave to the CogPrints version.
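
For what it's worth, the published core of Google's ranking (Brin
and Page's PageRank) depends on the link structure of the web, not
on visit counts, so visits alone probably can't explain it. A toy
power-iteration sketch of that published idea (the three-page link
graph is entirely hypothetical, and Google's production ranking is
proprietary and far more elaborate than this):

```python
# Power-iteration sketch of the published PageRank idea: a page's
# rank derives from the ranks of the pages linking to it.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

# Hypothetical toy graph: an archive copy and a journal abstract
# page link to each other; a mirror links out but gets no inlinks.
toy = {
    "journal_abstract": ["archive_copy"],
    "mirror": ["archive_copy", "journal_abstract"],
    "archive_copy": ["journal_abstract"],
}
ranks = pagerank(toy)
print(sorted(ranks, key=ranks.get, reverse=True))
```

On this toy graph the mirror, which nothing links to, ends up ranked
last regardless of how often it is visited.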

 The first question I have, is what is the justification for this
 journal existing in the first place?  The second, Jim, is why, as
 a senior scientist not seeking promotion or tenure, did you bother
 publishing it there?  Did you think that the imprimatur of this unknown
 journal's peer review would add anything to your name in this field as
 an indicator of quality?  The principal reason I can see would be the
 desire to add its regular readership, however small, to those who would
 see your paper. It can't be just to get the paper indexed in Medline,
 because Medline doesn't yet include the journal.

David, I've tried to answer your question (please see above) by sketching
out the design of my little experiment on the self-archiving of a
postprint, at different locations.

 Could we concentrate better on the need for open access repositories if
 we did not waste effort on unnecessary journal publication?  Everyone will
 I hope understand that this is not primarily intended for Jim personally,
 but to authors in general.

My commentary in JCIM does contain some novel material (including some
comments on Internet research ethics), in comparison with the commentary
on the same subject that I published last year in HQLO (again, see
above). And, I expect that the JCIM will reach quite a different group
of readers than does HQLO. So, I (of course!) don't regard it as an
unnecessary publication. But, my primary goal was to undertake a small
experiment on self-archiving (*not* one on self-promotion). :-)

Jim Till
University of Toronto


Re: OAI compliant personal pages

2004-02-15 Thread Jim Till
On Sat, 14 Feb 2004, Peter Suber wrote:

[ps] Don't forget about Kepler,
[ps] http://kepler.cs.odu.edu:8080/kepler/index.html,
[ps] software for creating an archivelet or an
[ps] OAI-compliant archive for the research output of a
[ps] single person.  Kepler runs on Windows 2000/XP,
[ps] Linux, Solaris, and Mac OS X.

Peter, I've summarized my experience with the initial
version of a Kepler archivelet in a previous message posted
to the AmSci Forum [on the Subject: Re: Free Access vs.
Open Access, archived at the /~harnad/Hypermail/Amsci/ site
on Fri Jan 09 2004]:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/3384.html

The stability of the server that supported the cached
version of my archivelet became an issue.

In that same message, I also asked for advice about
self-archiving a current eprint of mine (it's *not* about
electronic publishing, it's an invited commentary about
cancer-related electronic support groups). This eprint is
currently in preprint form. It's been peer-reviewed and
edited, and will be published soon in a TA journal (Journal
of Cancer Integrative Medicine 2004, vol. 2, no. 1).

I've signed a publication agreement with this journal (after
some negotiation) which permits me to retain copyright, and
to self-archive the peer-reviewed and edited version. This
final version (with some minor modifications, such as the
addition of a statement about copyright at the end of the
preprint) has already been self-archived, at least
temporarily, at a non-OAI-compliant site hosted by
tripod.com. See: Cancer-related electronic support groups
as navigation-aids: Overcoming geographic barriers:
http://ca916.tripod.com/index-9.html

The preprint hasn't, so far, been cached by Google, nor by
the Internet Archive/Wayback Machine at:
http://www.archive.org/ (but, I self-archived it only about
a week ago, on Feb. 7, 2004).

As I mentioned in an earlier message posted to this forum
[on the Subject: Re: Kepler: Author-Based Archivelets,
archived on Fri Jun 29 2001, at:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1397.html
], my main difficulty with the initial Kepler archivelets
was my lack of familiarity with proper use of Dublin Core
metadata, and especially, uncertainty about appropriate use
of such metadata for an OAI-compliant archivelet.
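
For a plain HTML page like the one at tripod.com, the conventional
way to embed Dublin Core is a set of meta tags in the page head. A
minimal sketch, with values taken from the preprint described above
(the exact tagging shown is one common convention, not a requirement
of any particular harvester):

```python
# Sketch of the conventional HTML embedding of Dublin Core metadata:
# <meta name="DC.xxx" content="..."> tags, plus a <link> identifying
# the DC element-set schema.
from html import escape

def dc_meta_tags(fields):
    lines = ['<link rel="schema.DC" '
             'href="http://purl.org/dc/elements/1.1/" />']
    for element, value in fields.items():
        lines.append('<meta name="DC.%s" content="%s" />'
                     % (element, escape(value, quote=True)))
    return "\n".join(lines)

print(dc_meta_tags({
    "title": "Cancer-related electronic support groups as "
             "navigation-aids: Overcoming geographic barriers",
    "creator": "Till, James E.",
    "type": "Text",
    "date": "2004-02-07",
}))
```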

I've tried to add some Dublin Core metadata to the preprint
that I've self-archived at the tripod.com site (see above).
Of course, I'd prefer to self-archive the final version
(together with a complete citation to the published version)
in a well-established, OAI-compliant, stable archive. As
mentioned in my previous message,
[http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/3384.html
], CogPrints is a possibility, even though the content of my
eprint isn't well-suited to CogPrints.

My experimentation with self-archiving will continue.

Jim Till
University of Toronto


OAI compliant personal pages

2004-02-10 Thread Jim Till
On Tue, 10 Feb 2004, Jean-Claude Guédon wrote [in part]:

[j-cg] 2. If we look at the growing number of open access
[j-cg] journals and the growing number of open access
[j-cg] repositories, including OAI compliant personal
[j-cg] pages, and if we look at OA harvesters, I would say
[j-cg] that movement is still a minority movement but that
[j-cg] it is growing well and even fast.

I noted with interest Jean-Claude's comment about OAI
compliant personal pages. How can such pages be identified
as OAI compliant (and, how can their number be estimated)?
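
One operational answer, I'd guess: a page (or rather, its base URL)
is OAI compliant only if it answers the OAI-PMH Identify verb with a
well-formed response. A sketch of such a check; the endpoint URL is
hypothetical, and rather than fetch it live, this parses a sample
response of the kind a compliant archivelet would return:

```python
# Checking OAI compliance: a base URL must answer "?verb=Identify"
# with an OAI-PMH document naming the repository.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def identify_url(base_url):
    return base_url + "?verb=Identify"

def repository_name(identify_response_xml):
    """Return the repositoryName, or None if the reply isn't OAI-PMH."""
    root = ET.fromstring(identify_response_xml)
    el = root.find("%sIdentify/%srepositoryName" % (OAI, OAI))
    return el.text if el is not None else None

sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <Identify>
    <repositoryName>Example personal archivelet</repositoryName>
    <protocolVersion>2.0</protocolVersion>
  </Identify>
</OAI-PMH>"""

print(identify_url("http://example.org/oai"))
print(repository_name(sample))
```

Estimating the number of such pages would then amount to crawling
candidate base URLs and counting the ones that pass this check.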

Jim Till
University of Toronto


Re: Free Access vs. Open Access

2004-01-09 Thread Jim Till
On Thu, 8 Jan 2004, Stevan Harnad wrote [in part]:

[sh] If you have the money to publish *one* article in
[sh] PLoS ($1500) you have more than enough money to set
[sh] up at least one eprint archive. (Kepler OAI
[sh] archivelets might be an even cheaper solution:
[sh] http://www.dlib.org/dlib/april01/maly/04maly.html ).

I did set up, in June 2001, the original version of a Kepler
archivelet. However, the original Kepler Search Service,
where my eprints were cached, is no longer supported by the
research group at Old Dominion University. So, the stability
of the server that supported the archivelet did become an
issue.

FYI, the current home page for Kepler is at:
http://kepler.cs.odu.edu:8080/kepler/index.html

At the bottom of this page is a link labeled:
"What happened to the previous version of Kepler?",
http://kepler.cs.odu.edu:8080/kepler/previous-kepler.html

If one follows this link, the page obtained includes this
paragraph:

The first version of Kepler as described in D-Lib Magazine
7(4) is no longer functioning. Users of old Kepler are urged
to upgrade to the new archivelet. The publications that were
previously uploaded via old Kepler are available in the test
group section.

There's a link to the test group, but clicking on it has
yielded, on several occasions, only a 404 (not found)
error message.

So, my experiment with the first version of Kepler was an
interesting one, but I've decided not to repeat it with a
new archivelet. One experience with instability of the
host server was enough for me.

In my previous message, I also asked for advice about
self-archiving a current eprint of mine (it's *not* about
electronic publishing, it's an invited commentary about
cancer-related electronic support groups). It's currently in
preprint form, and I'd prefer not to self-archive it until
it's in postprint form.  As I mentioned in my previous
message, I'm retaining copyright, and the right to
self-archive the postprint version (if it's accepted for
publication, after peer-review by the toll-access journal to
which it's been submitted). But, where to self-archive the
postprint?

As I mentioned previously, my university has a community-
based eprint repository, but I'm not a member of any of the
current communities. (BTW, the new Kepler archivelets are
also, I believe, community-based).

My eprint also isn't suitable for the Quantitative Biology
section of the arXiv repository. What about CogPrints?

Stevan responded:

[sh] Does it not look compatible with any of the following
[sh] existing CogPrints subject categories?
[sh] http://cogprints.ecs.soton.ac.uk/view/subjects/
[sh]
[sh] * Electronic Publishing
[sh]   o Archives (34)
[sh]   o Copyright (12)
[sh]   o Economics (21)
[sh]   o Peer Review (16)

No, it doesn't. However, it does contain a section about
Internet research ethics (in the context of research
involving cancer-related electronic support groups).

So, perhaps it wouldn't be entirely ridiculous to include it
under this CogPrints subject category:

* Philosophy
   o Ethics (18)

Jim Till
University of Toronto


Re: Free Access vs. Open Access

2004-01-03 Thread Jim Till
On Fri, 2 Jan 2004, Barbara Kirsop wrote [in part, on the
Subject: Re: Free Access vs. Open Access]:

[bk] The present discussions on the AmSci forum on whether
[bk] 'open' is the same as/different from 'free' access and
[bk] comparing this with the need to feed the starving now
[bk] or wait a bit til everyone can have 'organic' food is
[bk] spot on. I reflect that these discussions, erudite and
[bk] entertaining as they are, are of little interest to
[bk] science in the developing world. Scientists (and
[bk] patients with malaria) in the developing world need
[bk] the information now, asap, in any format that can best
[bk] be provided, don't wait til everything is perfect, just
[bk] do it. And science in the developed world equally needs
[bk] the highly relevant research from the developing
[bk] regions now - though it mostly doesn't recognise this
[bk] knowledge gap.

Thanks for this eloquent summary of the global health argument
in favour of open access.

I must confess that I've not read every word of every message
in the interesting thread on 'open' vs. 'free' access. Has
anyone who has contributed to this thread proposed a revised
definition of open access? Or, is the debate mainly about how
best to implement the BOAI definition? See:
http://www.earlham.edu/~peters/fos/boaifaq.htm#openaccess

By 'open access' to this literature, we mean its free
availability on the public internet, permitting any
users to read, download, copy, distribute, print, search,
or link to the full texts of these articles, crawl them
for indexing, pass them as data to software, or use them
for any other lawful purpose, without financial, legal,
or technical barriers other than those inseparable from
gaining access to the internet itself. The only
constraint on reproduction and distribution, and the only
role for copyright in this domain, should be to give
authors control over the integrity of their work and the
right to be properly acknowledged and cited.

If anyone is proposing a revised definition, then what
is it?

Jim Till
University of Toronto


Re: Be prepared for commercial misuse of the term open access

2003-10-31 Thread Jim Till
On Fri, 31 Oct 2003, [identity removed] wrote [in a
message forwarded by Stevan Harnad]:
 How strange - if you go to the Nature Immunology website
 the words "Open Access" appear in the left column - not
 quite sure what it refers to - any ideas?
 http://www.nature.com/ni/

Re http://www.nature.com/ni/:

The heading "Open Access" in the frame on the left refers to
information that is openly accessible via this site (e.g.,
under "Information", information for authors, and under
"Impact Factor", the impact factors for the journal for 2002
and 2001). The heading "Open Access" is in contrast to the one
above it, "Registered Users" (e.g., under "Archive", there's
access to an archive of past issues, but only short abstracts
are openly accessible).

Jim Till
University of Toronto


Re: The True Cost of the Essentials (Implementing Peer Review)

2003-01-16 Thread Jim Till
On Wed, 15 Jan 2003, Fytton Rowland wrote [in part, on the
Subject: Re: Nature's vs. Science's Embargo Policy]:

[fr] A review study that I undertook last year suggests that
[fr] the true figure is closer to the $500 than the $1500,
[fr] assuming a rejection rate of 50%. If rejection rates
[fr] are very high, as in Manfredi la Manna's example, then
[fr] the cost per *published* paper is higher.  However, one
[fr] has to ask whether, in a paperless system, rejection
[fr] rates need to be so high!

Fytton, are the results of your review study openly accessible?
If so, where?

About rejection rates: Zuckerman and Merton (1971) reported
substantial variation, with rejection rates of 20-40% in the
physical sciences, and 70-90% in the social sciences and
humanities:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1107.html.

A much more recent study by ALPSP yielded results that appear
to be consistent with the earlier data:
http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1127.html.

I'd predict that, in a paperless system, rejection rates will
continue to vary across disciplines. If this prediction is
correct, then costs per published paper will also vary across
disciplines.

Jim Till
University of Toronto


Re: The True Cost of the Essentials (Implementing Peer Review)

2003-01-16 Thread Jim Till
On Thu, 16 Jan 2003, Andrew Odlyzko wrote [in part]:

[ao] The recent postings to this list about rejection
[ao] rates and costs of peer review point out yet
[ao] another way that costs can be lowered:  Elimination
[ao] of the wasteful duplication in the peer review system.

Publishers of several journals can achieve economies of scale
by using the same staff to oversee multiple journals.

Economies of scale for the peer reviewers would require
centralized peer review for a particular field or
discipline. This approach has been tested in Canada by
the Canadian Breast Cancer Research Initiative (CBCRI):
http://www.breast.cancer.ca/. The Canadian Institutes
of Health Research (CIHR), the National Cancer Institute of
Canada (NCIC), the Canadian Cancer Society, Health Canada,
and the Canadian Breast Cancer Research Foundation (CBCF),
all use the same peer review system (that of the NCIC)
for the evaluation of research proposals submitted directly
to the CBCRI.

However, this hasn't really achieved much economy of scale,
because some of these agencies (NCIC, CIHR, CBCF) also, for
what I think are good reasons, peer-review those
breast-cancer applications that are sent directly to them,
rather than to the CBCRI. The individual research teams
decide (and some choose, again for what I think are good
reasons) to submit essentially the same application to more
than one of these agencies.

Different peer-review committees judge quality according
to somewhat different criteria, and involve committee
members who may be true peers in relation to one aspect of
as research field or discipline, but not in relation to
another. The mix of expertise matters.

So, many research teams prefer to have an opportunity to
take more than one kick at the can. If peer-review is
regarded as a process of weighted randomization, then, from
the point of view of an individual research team, the
probability of successfully obtaining support is increased
if multiple applications are submitted.

The situation isn't very different for peer-review of
research reports, except that the number of peers
involved in the review process is usually much smaller
(e.g. 2 or 3 people, instead of about 10). The smaller
the number of reviewers, the greater the variance in the
score or rating of perceived quality.
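
That last point is just the standard error of a mean: for n
independent reviewers whose scores share a spread sigma, the mean
score has standard error sigma / sqrt(n). A sketch (the sigma value
is an illustrative assumption, not an empirical estimate):

```python
# Why fewer reviewers means a noisier verdict: the standard error
# of the mean of n independent scores falls as 1/sqrt(n).
import math

def standard_error(sigma, n_reviewers):
    return sigma / math.sqrt(n_reviewers)

sigma = 1.0  # assumed spread of individual reviewer scores
for n in (2, 3, 10):
    print("n=%2d reviewers -> standard error %.2f"
          % (n, standard_error(sigma, n)))
```

So a 10-person grant panel's mean rating is roughly twice as precise
as that of a 2- or 3-referee journal review, all else being equal.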

Jim Till
University of Toronto


Re: Draft Policy for Self-Archiving University Research Output

2003-01-12 Thread Jim Till
On Sun, 12 Jan 2003, Stevan Harnad wrote [in part]:

 It is no longer possible to close that door. For where journal
 publishers explicitly refuse to allow self-archiving, thereby
 broadcasting that they do not share their authors' goal of maximizing
 their research impact, the preprint-plus-corrigenda strategy --
 http://www.eprints.org/self-faq/#publisher-forbids -- is still available
 to all authors for attaining almost exactly the same end -- while
 implicitly naming-and-shaming, each time that strategy needs to be used,
 those publishers who thereby advertise that for them maximizing their
 potential revenue streams is more important than maximizing the
 potential impact of the research they publish.

Stevan, this point seems to me to be quite an important one, in relation
to the evaluation of 'best practices' designed to promote open access to
the peer-reviewed research literature, and thus to bring us measurably
closer to the tipping point. It would be very helpful, I think, if you
could provide some real-life examples of cases where the
preprint-plus-corrigenda strategy has already been used to implicitly
name-and-shame a major publisher. Do you have a favorite example?

Jim Till
University of Toronto


Re: Nature's vs. Science's Embargo Policy

2003-01-10 Thread Jim Till
My own conclusion is that Nature's policy continues to be less
restrictive than that of Science. Science's policy (Sp):
http://www.sciencemag.org/feature/contribinfo/faq/prioronline_faq.shtml

[Sp] What about manuscripts that have been posted online
[Sp] before submission?
[Sp]
[Sp] We do not consider manuscripts that have been previously
[Sp] published elsewhere, including those published on the Web.
[Sp] Posting of a paper on the Internet may be considered prior
[Sp] publication that could compromise the originality of the
[Sp] Science submission. Thus, if you are planning to submit
[Sp] your paper to Science, it should not be posted online.
[Sp]
[Sp] We allow posting of manuscript copies of papers at
[Sp] not-for-profit publicly funded World Wide Web archives
[Sp] immediately upon publication. We also provide a free
[Sp] electronic reprint service to authors that allows access
[Sp] to their formatted and proofed paper on Science Online.

Perhaps Nature, a high-impact, for-profit journal, has gone about
as far as it's willing to go? But, what about Science? What
additional pressures must be exerted on an Association (one that's
supposed to be fostering the advancement of science) in order for
it to accept an embargo policy similar to that of Nature?

Perhaps both Science and Nature need to be subjected to
head-to-head competition from an open access journal that
has an analogously broad scope (such as a PLoS research
journal)?

Jim Till
University of Toronto


PLoS Biology

2003-01-04 Thread Jim Till
One of the two new peer-reviewed open-access journals that the Public
Library of Science (PLoS) plans to launch has the working title PLoS
Biology; see http://www.publiclibraryofscience.org/journals.htm.

This journal will compete head-to-head with the leading existing
publications in biology...

It's also noted, at:
http://www.publiclibraryofscience.org/openaccess.htm
that:

Since 1999, the London-based publisher BioMed Central has published a
diverse group of peer-reviewed, open-access biomedical research
journals, and offered publication services to scientific groups and
societies who wish to launch new open-access publications. They are a
strong ally of PLoS.

But, one of BMC's top-level new journals is the Journal of Biology, edited
by Martin Raff http://www.jbiol.com/: Journal of Biology, an
international journal publishing biological research articles of
exceptional interest and importance, published by BioMed Central.

Can anyone explain to me why the Journal of Biology and PLoS Biology
won't be in head-to-head competition?

Jim Till
University of Toronto


Higher rate of citation

2002-11-30 Thread Jim Till
On Fri, 29 Nov 2002, Jan Velterop wrote [in part, on the
Subject: Re: UK Research Assessment Exercise (RAE) review]:

jv Little wonder that scientists are often not aware of the issues of
jv serials crises and open access solutions. If they were, many would
jv be likely to take an attitude to publishing their research that is
jv similar to their attitude towards scientific problems: experiment
jv and 'push the envelope'. The theory and the hypotheses are clear.
jv And experimental results are now, slowly but steadily, becoming
jv available, such as a generally higher rate of citation for articles
jv that are freely accessible to anyone.

Is there an (openly-accessible) summary of the evidence that supports the
hypothesis that openly-accessible research reports generally (i.e. in
several quite different disciplines) attract higher citation rates?

If such a summary exists, I'd like to know about it. It would be helpful
to me in my local OA (and FOS) advocacy efforts.

Jim Till
University of Toronto


Re: Higher rate of citation

2002-11-30 Thread Jim Till
My thanks to those who pointed to Steve Lawrence's work, based on
conference articles in computer science and related disciplines:

http://www.neci.nec.com/~lawrence/papers/online-nature01/
Online or Invisible?, Steve Lawrence, NEC Research Institute

An edited version appears in: Nature, Volume 411, Number 6837, p. 521,
2001:

http://www.nature.com/nature/debates/e-access/Articles/lawrence.html
Free online availability substantially increases a paper's impact.

An excerpt:

   The results are dramatic, showing a clear correlation between the
   number of times an article is cited and the probability that the
   article is online. More highly cited articles, and more recent
   articles, are significantly more likely to be online, in computer
   science. The mean number of citations to offline articles is 2.74,
   and the mean number of citations to online articles is 7.03, an
   increase of 157%.

If these dramatic results (for conference articles in computer science and
related disciplines) could be confirmed in other unrelated disciplines,
then the evidence would become even more compelling.
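For reference, the 157% figure quoted from Lawrence follows directly from
the two means he reports. A quick sketch of the arithmetic (the code is my
own illustration, not anything from the paper):

```python
# Arithmetic behind the 157% figure quoted above: the relative
# increase of the online mean citation count over the offline mean.
mean_citations_offline = 2.74
mean_citations_online = 7.03

relative_increase = (mean_citations_online - mean_citations_offline) / mean_citations_offline
print(f"{relative_increase:.0%}")  # → 157%
```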

Another question: has anyone obtained evidence that the impact factors for
open-access journals have increased across time, in comparison with
competing toll-access journals?

Jim Till
University of Toronto


Re: Garfield: Acknowledged Self-Archiving is Not Prior Publication

2002-09-12 Thread Jim Till
On Thu, 12 Sep 2002, Stevan Harnad wrote:

 Publishers are essential contributors to the implementation of peer
 review, but their art and skill does not lie in the making of the
 judgments. Those judgments are made by the peer-reviewers --
 researchers who give away their services for free, just as the authors
 are researchers who give away their research papers for free.

I have no disagreement about the major issues here, two of which could be
identified as: 1) that researchers supported by public funds should make
their research reports freely available; and, 2) that peer-reviewers and
editors do make crucial contributions to the research and scholarly
literature.

I have a minor quibble: the judgments about peer-review are (obviously!)
indeed made by the peer-reviewers.  But, the judgments about how to deal
with the peer-reviewers' comments and criticisms (if any) are made by the
editorial staff of a journal.  One can have the situation where the
peer-reviewers have only minor comments, but the editors may still delay
publication for long periods, or even reject a report, for a variety of
reasons.  At their best, these editorial decisions make a very positive
contribution to quality control.  At their worst, they become an
unacceptable form of censorship.

Unfortunately, it's sometimes very difficult (e.g. when dealing with
experimental results, or concepts, that subsequently lead to paradigm
shifts) for readers as well as editors, to distinguish between desirable
quality control and unacceptable censorship.  In the latter situation, the
self-archiving of preprints becomes a means to reduce the harmful effects
of such censorship.  But (perhaps fortunately?!) such paradigm shifts
are quite rare.

Jim Till
University of Toronto


Re: Science Article (Roberts et al.) and Science Editorial

2002-07-15 Thread Jim Till
On Mon, 15 Jul 2002, Stevan Harnad wrote [in part, in response to a
question from Ingemar Bohlin]:

[sh] We [at AAAS/Science] have decided to make our own back research
[sh] reports and articles freely available after 12 months--at our
[sh] own Web site--later this year.
[sh]
[sh] The Editors, Science (2001) Is a Government
[sh] Archive the Best Option? Science 291: 2318b-2319b
[sh] http://www.sciencemag.org/cgi/content/full/291/5512/2318b
[sh]
[sh] Amsci thread:
[sh] http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1236.html
[sh]
[sh] I have heard nothing since that editorial. Perhaps someone else
[sh] has?

What I found, by searching Science magazine (via:
http://www.sciencemag.org/search.dtl) for editorials by Donald Kennedy,
were (in addition to the editorial referred to by Stevan):

http://www.sciencemag.org/cgi/content/summary/295/5552/13
Another Year for Science,
Donald Kennedy,
Science 2002 January 4; 295: 13

An excerpt:

We made the decision last year, as a service to the scientific community,
to release the full content of our Reports and Research Articles after 12
months on Science Online.

See also:

http://www.sciencemag.org/cgi/content/summary/294/5549/2053
Science and Development,
Donald Kennedy,
Science 2001 December 7; 294: 2053

An excerpt:

SciDev.Net is the brainchild of David Dickson, a former editor at Nature.
Science is delighted to be a full partner with Nature in this venture; for
the right cause, competitors can work together! The site, now up at
www.scidev.net, will have timely science news, several reports or short
items from Science and Nature each week, opinion features, and information
about meetings, grants, and jobs. The emphasis, obviously, will be on
material relevant to the developing world. Science normally makes papers
freely available on our Web site 1 year after publication; we will be
making this selected material available immediately. Regional gateways
will allow SciDev.Net to tailor some information to particular settings
and circumstances.

Please note the sentence: Science normally makes papers freely available
on our Web site 1 year after publication.

I tried to find something similar in Nature using the search engine (at:
http://www.nature.com/dynasearch/app/dynasearch.taf?site_source=nature),
but wasn't successful.

Jim Till
University of Toronto


Re: Self-archiving, academic staff, universities intellectual property

2002-06-23 Thread Jim Till
On Sun, 23 Jun 2002, Stevan Harnad wrote [in part, in response to
Richard Poynder]:

 What one wishes to conceal, one does not publish; what one publishes,
 one does not wish to conceal. The only research at issue here is the
 kind researchers wish to publish: the peer-reviewed research
 literature.)

In the biomedical field (and, especially, in the area of biotechnology),
the issue of filing for patent protection of intellectual property prior
to publication in the peer-reviewed research literature can be a
significant one (if a possible invention is involved).

At present, I suspect that even academically-oriented research groups,
institutes and networks in this field would expect authors to file for
patent protection of IP before self-archiving a preprint (e.g. one that's
intended for subsequent publication in the peer-reviewed literature).

There can be delays involved in the process of filing for patent
protection.  This can be a barrier in Canada, which uses a first-to-file
approach to the protection of IP.  The tech-transfer office of the
university (or research institute or research network) needs to be
well-supported, so that a backlog of requests for attention doesn't pile
up, and self-archiving of preprints (or, submission to an open-access
peer-reviewed journal, such as BioMed Central's new Journal of Biology)
isn't unduly delayed (e.g. for some weeks, or even months).

BTW, the Journal of Biology (see: http://www.jbiol.com/) permits copyright
to be retained by the author.  The editorial board of the Journal of
Biology includes names of people who've played major roles in efforts to
provide open access to the health-related research literature, such as
Harold Varmus, currently at Memorial Sloan-Kettering Cancer Center, USA.

I have no involvement with BioMed Central or the Journal of Biology
(although I do know, and greatly respect, the latter's Editor-in-Chief,
Martin Raff, of University College London, UK).

Jim Till
University of Toronto



Kepler archivelets

2002-06-15 Thread Jim Till
A letter of mine has recently appeared in the printed version of the
June/July issue of University Affairs (a publication of the Association
of Universities and Colleges of Canada; it's distributed to faculty
members across Canada).  My letter is in response to an article entitled
Publishing freestyle, that appeared in the May issue of the same
publication.  The original article is freely available online (via:
http://www.aucc.ca/en/archbody.html#may02 ).  Comments about the Budapest
Open Archives Initiative, and about Stevan Harnad's efforts, are included
in the original article.  Here's an excerpt:

Stevan Harnad, who holds a Canada Research Chair in perception and
language at the Université du Québec à Montréal and founded the
influential journal, Behavioral and Brain Sciences, is a huge booster of
open-access scholarly publishing. He is one of the principal signatories
of the Budapest Open Access Initiative, which has as its goal the free and
unrestricted availability of all scholarly research online.

An online version of my letter isn't available via the www.aucc.ca
website, but I've deposited an open-access copy in my personal Kepler
archivelet.  It's already been cached by the Kepler Service Provider,
and is available via: http://makeashorterlink.com/?P1E015F01

An excerpt from my letter:

Another intriguing example of the kind of academic research that
is currently under way in this field is the Kepler project of
the Digital Library Research Group at Old Dominion University
in Norfolk, Virginia (see: http://kepler.cs.odu.edu/ ). The
intent of this project is to develop digital electronic archives
for individuals (archivelets).

Jim Till
University of Toronto


Re: Is The Feel of Paper Immortal?

2002-01-06 Thread Jim Till
The threads about access, dissemination, peer-review costs
and preservation raise another issue: what about the
future of journals?

There's a comment about this in the Jan. 5 issue of the
electronic version of BMJ (freely accessible, via:
http://bmj.com/cgi/content/full/324/7328/5 ).

The BMJ: moving on
Richard Smith
BMJ 2002; 324: 5-6

An excerpt:

[rs] The web has advantages of speed, reach, interactivity,
[rs] and infinite space, but paper has the advantages of
[rs] readability, portability, and attractiveness. The
[rs] future is not paper or electronic but paper and
[rs] electronic.

Jim Till
University of Toronto


Re: Is The Feel of Paper Immortal?

2002-01-06 Thread Jim Till
A clarification: the prediction referred to by Stevan was
made by Richard Smith, the editor of the BMJ (and, I
believe, the chief executive of the BMJ Publishing Group?)
not by me.  I'll attach an excerpt of the entire paragraph,
so that the context for his prediction may become more
apparent.

My crystal ball is no clearer than that of anyone else,
but perhaps, because the BMJ is *already* freely available,
it's already ahead of many other journals, and it must
consider a somewhat different future -  one that hasn't
arrived yet for those other (not-yet-freely-available)
journals?  For some of those other journals, such a future
may never arrive - they may die before it does!

Jim Till
University of Toronto

A more complete excerpt:
(see: http://bmj.com/cgi/content/full/324/7328/5 )

[rs] Other changes in the journal are based on the idea of
[rs] using paper and electronic media to maximum advantage.
[rs] bmj.com is the BMJ in that it contains everything
[rs] published in the paper version plus much more. It
[rs] also now has many more readers than the paper BMJ.
[rs] We print about 110,000 copies of the weekly BMJ, and
[rs] more than 90,000 circulate in Britain. bmj.com has
[rs] around 150,000 visitors a week and nearly 400,000 in
[rs] a month, most of them from outside Britain. The web
[rs] has advantages of speed, reach, interactivity, and
[rs] infinite space, but paper has the advantages of
[rs] readability, portability, and attractiveness. The
[rs] future is not paper or electronic but paper and
[rs] electronic.

On Sun, 6 Jan 2002, Stevan Harnad wrote:

 On Sun, 6 Jan 2002, Jim Till wrote:

  http://bmj.com/cgi/content/full/324/7328/5
 
  [rs] The web has advantages of speed, reach, interactivity,
  [rs] and infinite space, but paper has the advantages of
  [rs] readability, portability, and attractiveness. The
  [rs] future is not paper or electronic but paper and
  [rs] electronic.

 Just a slight correction:

 The PRESENT is paper and electronic.

[remainder snipped]


Re: Elsevier's ChemWeb Preprint Archive

2001-09-06 Thread Jim Till
My thanks to James Weeks for taking the time required to reply to my
previous request for comments.  On Thu, 6 Sep 2001, James wrote:

[jw] The CPS should indeed satisfy the inter-operability criterion when
[jw] we achieve compliance with the Open Archives Initiative. It is our
[jw] intention that the CPS will be compliant at the start of October.

You had mentioned this plan in a previous message.  Good news that this
goal might be accomplished within about a month.

[jw] I also agree that the views and ranking statistics could provide
[jw] indicators for the impact of a particular preprint. By a citation
[jw] data indicator, I understand that the impact would be ascertained
[jw] by examining the number of other papers (both inside and outside the
[jw] server) which cite that preprint. This is an interesting idea and I
[jw] would certainly like to learn more about how this could be
[jw] achieved.

I hope that other participants in this forum (who are much better-informed
than I am about how best to pursue this approach to an assessment of
impact) will provide some comments.

[jw] For the sign-posting criterion, I agree that it is important to
[jw] provide authors with the ability to link to the published version.

[snip]

[jw] After the first version of the preprint has been submitted, the
[jw] author (and only the author) is presented with three hyperlinks when
[jw] they access their article page: 1) Add more supplementary files;
[jw] 2) Revise the full text of the preprint; 3) Redirect to the
[jw] published article. This redirection is achieved using the LitLink
[jw] technology of MDL Information Systems.

If I understand correctly, these options are not mutually-exclusive?
You then commented (re the 3rd option):

[jw] When users then view the article page, they are presented with a
[jw] Published full text link.  When this link is accessed, LitLink
[jw] resolves the citation and finds from where the article may be
[jw] downloaded. Clearly, if this is from a publisher's website, users
[jw] would typically have to pay for access. However, all of the other
[jw] information - including the preprint meta-data and any other files
[jw] uploaded to the server - do of course remain completely free to
[jw] access on the CPS.

So, if authors choose the 3rd option, a link to the published version is
added to the preprint that's posted at the CPS.  Am I correct to conclude
that, when the 3rd option is chosen by an author, the original full text
of the preprint (plus any supplementary files) can still be accessed on
the CPS?

Can you easily measure what proportion of authors have (so far) chosen the
3rd option?  Of those authors whose preprints have subsequently been
published in the peer-reviewed literature, I wonder what proportion have
chosen the 3rd option, what proportion have added the relevant hyperlink
into the discussion thread for their own preprint, and what proportion
have done nothing about providing a link to the published version?

[jw] In terms of this sign-posting, I do think that it is equally
[jw] important that other authors link back to references which appear on
[jw] preprint servers.

[snip]

[jw] ...the article is also given a friendly URL -
[jw] http://preprint.chemweb.com/category/YYMMNNN. If a user accesses
[jw] this URL they are taken directly to the article, without having to
[jw] first browse through the server. In this way, it is easy for
[jw] authors to reference the CPS preprints.

I like the shortness of such a URL; it is, indeed, friendly!

James, thanks again for your very interesting comments.  As you can see,
about all that I've contributed in this response is some more questions!

Jim Till
University of Toronto


Re: Update on Public Library of Science Initiative

2001-09-04 Thread Jim Till
On Tue, 4 Sep 2001, Peter Suber wrote [in part]:

[ps] ICAAP can provide
[ps] technology and support for new free online journals.  It can work
[ps] with PLoS to extend its initiative to the humanities and social
[ps] sciences.

It seems to me that a major long-term advantage of PLoS is that the
initiative could be extended to the humanities and social sciences.

BioMed Central already offers a set of freely-accessible online
journals, and is inviting signers of the PLoS open letter to submit
research papers (see below).  However, unless BioMed Central is not
only very successful, but also changes its name, it's unlikely to be
extended to the humanities and social sciences in the near future.

The following currently appears on the home page of the BioMed Central
website (at http://www.biomedcentral.com/):

[BMC] Public Library of Science Deadline:
[BMC] We wish to inform those who signed the open letter of the
[BMC] Public Library of Science advocacy group, which called for
[BMC] publishers to make research papers freely available, that
[BMC] BioMed Central complies with all the requirements listed
[BMC] in the letter. We invite all signatories to publish their
[BMC] research in our journals.

Jim Till
University of Toronto


Re: Elsevier's ChemWeb Preprint Archive

2001-08-24 Thread Jim Till
A comment in response to previous messages from James Weeks:

In a message that I posted to this forum on 24 May 2001, on the subject
Re: ClinMed NetPrints, I tried to outline three criteria (or, 'design 
usability guidelines'?) for an eprint archive:

1) an 'inter-operability' criterion;
2) an 'impact-ranking' criterion;
3) a 'sign-posting' criterion.

James, in the message that you posted to this forum on August 17, you
indicated that there are plans for the Chemistry Preprint Server (CPS) to
be OAI-compliant within the next two months.  If that goal is accomplished,
the first ('inter-operability') criterion will (I assume) have been met.
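For context, OAI compliance means exposing each record's metadata in a
standard, harvestable form (Dublin Core over the OAI-PMH protocol), so
that any harvester can process records from any compliant archive. A
minimal sketch of parsing one such record; the XML below is hand-made for
illustration (simplified from the real protocol's envelope) and is not an
actual CPS response:

```python
# Parse a simplified OAI-PMH-style record: header fields live in the
# OAI namespace, Dublin Core fields in the DC namespace. A harvester
# needs only these standard names, not anything archive-specific.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

sample = """<record xmlns="http://www.openarchives.org/OAI/2.0/">
  <header>
    <identifier>oai:preprint.chemweb.com:0108001</identifier>
    <datestamp>2001-08-15</datestamp>
  </header>
  <metadata>
    <dc xmlns="http://purl.org/dc/elements/1.1/">
      <title>An example preprint</title>
      <creator>A. Author</creator>
    </dc>
  </metadata>
</record>"""

record = ET.fromstring(sample)
identifier = record.find(f"{OAI}header/{OAI}identifier").text
title = record.find(f"{OAI}metadata/{DC}dc/{DC}title").text
print(identifier, "-", title)
```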

Perhaps one way to begin to meet the second ('impact-ranking') criterion
is to provide the kind of views and ranking indicators that are
commented upon in the message that you posted on August 23.

Better, though, might be to assess the suitability of the eprint server
for yielding citation data.  As Tim Brody pointed out (as part of a
previous thread, in a message posted on May 24, on the subject: Re:
ClinMed NetPrints):

[tb] Which, from my technical point of view, is the reference lists for
[tb] the articles. As far as I'm aware no archives currently do this (I
[tb] know cogprints provides the facility for authors to give this
[tb] information, but does not re-export yet). Watch developments from
[tb] OpCit!

Re the third ('sign-posting') criterion, it's possible for the author of a
preprint posted at the CPS website to add, as part of a discussion thread,
a citation to the published version of the preprint.  (There will, of
course, usually be a time delay between the posting of the preprint and
the appearance of the published version, unless the 'preprint' is, in
fact, a post-print).

The arXiv server, in contrast, provides (what seems to be) a quite
convenient means for authors to add, to their previously-posted preprint,
a citation (and a link) to the published version.

James, might you be willing to comment on these three proposed criteria
(or, guidelines) for eprint servers, and on their relevance to the CPS?

Jim Till
University of Toronto


Re: Reasons for freeing the primary research literature

2001-08-19 Thread Jim Till
On Sat, 18 Aug 2001, Stevan Harnad wrote:

 On Fri, 17 Aug 2001, Jim Till wrote:

[jt] Re (1d): please bear in mind that a definition of the verb
[jt] censor is make deletions or changes in.

[sh] Peer review certainly is not censorship.

I seem to have touched a nerve when I used the eye-catching word
censorship in my list of should reasons.  Yes, Stevan, peer review
*is* a form of censorship.  But, it's one that's usually justified on the
basis of changes/deletions intended to enhance perceived quality (however
defined).  And yes, I already know that you, as the moderator of this
forum, prefer that it not be used for discussions about peer review.
Such discussions can divert attention from what I think we agree is a
major issue: freeing of the good-quality (however defined) primary
research literature.

[sh] (Deception would be involved in violating the Ingelfinger Rule
[sh] which announces that the journal will not REFEREE a paper that has
[sh] been self-archived on the Web. Here I unhesitantly (and
[sh] unpenitantly!) recommend ignoring this unnecessary, unjustifiable
[sh] and unenforceable rule, which, unlike copyright, has no force of
[sh] law, but is instead merely an arbitrary submission policy, like
[sh] declining to referee papers by authors who have blue-eyed uncles --
[sh] on which I would likewise recommend deception... )

Again, I seem to have touched a nerve with my use of another eye-catching
word, deception.  I'll touch that nerve again by arguing that the use of
deception in any way (including its use as a way to avoid the Ingelfinger
Rule) is ethically-questionable (even if the Ingelfinger Rule is
legally-unenforceable).  Use of deception requires strong justification.
Do the circumstances that are being debated here provide strong enough
justification?  Stevan seems entirely convinced that they do.  I'm not so
sure.

Jim Till
University of Toronto

P.S.: By the way, I don't think that these issues are silly, or
nonsense.  The use of such words does touch nerves of my own (although I
realize that it's just rhetoric: language designed to persuade or
impress).

Stevan also wrote, at the end of another message that's part of this same
thread:

 My temptation to agree is tempered by my sure knowledge that one could
 have substituted my own name for Albert's in the above statement, with
 equal truth (and likely to elicit at least as many echo-endorsements
 from others!)...

Because of my comments above (about my own nerves being touched) I can't
resist the temptation to add a comment here: sometimes, there's a need to
moderate the moderator!  But - that would be censorship [smile].  (Hmmm -
is it getting a little warm in this particular kitchen?)


Re: Reasons for freeing the primary research literature

2001-08-17 Thread Jim Till
As is his custom, Albert Henderson has focused his attention on his own
perception of only one of the reasons (the Library crisis) included in
my short list of major reasons why the primary research literature should
be freed (see below).

So far, no novel reasons have been mentioned.  Are there any?

Jim Till
University of Toronto


On Thu, 16 Aug 2001, Albert Henderson wrote [in part]:

[ah] on Sat, 11 Aug 2001 Jim Till t...@uhnres.utoronto.ca wrote:

[jt] But, what about reasons WHY the primary research literature should
[jt] be freed?  Here's my first attempt at a summary of some of the main
[jt] reasons:
[jt]
[jt] 1.  It should be done:
[jt]
[jt]  - Information gap: Libraries and researchers in poor countries
[jt] can't afford most of the journals that they need.
[jt]
[jt]  - Library crisis: Libraries and researchers in rich countries
[jt] can't afford some of the journals that they need.
[jt]
[jt]  - Public property: The results of publicly-funded research
[jt] should be publicly-available.
[jt]
[jt]  - Academic freedom: Censorship based on cost rather than
[jt] quality can't be justified.

[ah][snip]

[jt] What other important reasons have I neglected?

[ah] The most important motive behind the self-archiving
[ah] argument is that universities wish to unload the
[ah] profit-sapping burden of conserving knowledge. They
[ah] wish to reduce, perhaps eliminate, spending on
[ah] libraries.

[remainder snipped]


Re: Reasons for freeing the primary research literature

2001-08-17 Thread Jim Till
On Thu, 16 Aug 2001, Arthur Smith wrote [in part]:

[jt (1d)]- Academic freedom: Censorship based on cost rather than
[jt] quality can't be justified.

[as] (1d) I'm afraid I don't understand - can you describe a scenario
[as] where cost is involved in censorship somehow?

My proposed four main reasons why the primary research literature should
be freed were, in brief:

(1a) Information gap; (1b) Library crisis; (1c) Public property; and,
(1d) Academic freedom.

Re (1d): please bear in mind that a definition of the verb censor is
make deletions or changes in.

I can think of a number of researcher-side (and also of end-user-side)
examples of cost barriers to the dissemination of the (high-quality)
primary research literature.  Here's one example of such a scenario,
within the context of the author-give-away literature.  That is, the
author doesn't want to make a profit.  The author simply wants to give a
publication away.

Scenario: The top brand-name journal in the field (one that has, as its
explicitly-stated primary role, the advancement of a particular research
discipline), has peer-reviewed a preprint and finds it acceptable for
publication as it is.  But, the journal doesn't have (for reasons of
cost/revenue) an electronic version that's freely available online.  And
(again, for reasons of cost/revenue) this same journal won't accept the
preprint for publication if it's already been self-archived by the author.
Also (for the same cost/revenue reasons), it won't permit post-publication
self-archiving in any open archive.  And, when asked to do so, it refuses
to modify its current licence to publish agreement, one which forbids
post-publication self-archiving by the author.

And: the author's own peers and host institution regard anything not
published in this particular top brand-name journal as second-rate in
quality (even if, in the view of that same journal's own peer-reviewers,
the preprint is actually first-rate).

What should the author do, in order to avoid this (cost/revenue-based)
dissemination barrier?  Some possible options: (i) Thank the journal for
peer-reviewing the preprint, and simply self-archive it in an open
archive, together with a comment that it was considered to be acceptable
for publication by the brand-name journal (how to validate such a claim?).
(ii) Self-archive the preprint, but not inform the brand-name journal
(requires deception). (iii) Withdraw the submitted preprint, and re-submit
it to a lower-impact journal that either has a version that's
freely-available online, or permits open self-archiving of preprints
and/or postprints.

The third alternative (which is the one that I'd personally prefer)
results, I'll argue, in a form of censorship.  First, the article has been
deleted from (because it didn't enter into) the top-quality brand of
primary research literature, for reasons based on cost/revenue, not
quality.  Second, its dissemination has been significantly delayed, again
simply for reasons of cost/revenue, not quality.  Perhaps these particular
consequences won't be regarded as serious enough to justify use of the
word censorship?  Is there another word that might be more appropriate?
Blockage? Interference?

  2.  It can be done:
 
 That's debatable (as we've been doing here for some time). But even so,
 because something can be done, is that a reason it should be? I thought
 you were listing problems to be solved, not solutions in search of
 problems...

Please note that my should reasons preceded my can reasons.  Problems
that should be solved, and can be solved (I'll argue) merit inclusion in
an A-level category, distinct from those problems that: B) should be
solved, but can't, and, C) can be solved, but shouldn't.

Jim Till
University of Toronto


Re: Self-Archiving Refereed Research vs. Self-Publishing Unrefereed Research

2001-08-16 Thread Jim Till
On Fri, 10 Aug 2001, Arthur Smith wrote [in part, in response to a message
from Stevan Harnad]:

[sh] [...]
[sh] Self-archived eprints can be designed to carry health
[sh] warnings that are as shrill as we like a priori (or, more sensibly,
[sh] a posteriori, once we get an idea of the size of the bogus paper
[sh] problem -- if there is any).

[as] With medicine we are talking about lives that can be lost; I don't
[as] think a posteriori is good enough if we're seriously hoping that
[as] self-archives will be an appropriate means of distributing
[as] information to the final practitioners. And if it ISN'T an
[as] appropriate means of distributing the information, and the final
[as] end-users of the information ignore it in favor of traditional
[as] distribution of articles through journals, then where is the
[as] motivation for the authors to self-archive (i.e. if they are not
[as] reaching any more readers than they otherwise would)?

I suspect that an informed discussion of the rather complex issues of
'E-Health' (including open online access to good-quality health-related
information), is well beyond the intended scope of this particular forum.

A consumer-oriented perspective: censorship on the grounds of protecting
gullible patients and their families from untrustworthy health-related
information is increasingly less easy to justify.  We're well into an era
where patients often bring long lists of questions and comments to
consultations with health care professionals, based on information
obtained online.  Some of this information should be regarded as jewels,
and some as junk.  But, how to be sure which is which?

In this new era, some well-informed patients and their families may know
more about their own particular health problems, and about the
currently-available interventions for managing them, than do many
primary-care physicians (and especially, physicians who haven't been
diligent about their continuing medical education).

We're also well into an era of shared decision-making, where patients and
families (if they wish to) may decide which option to choose, from among
several options offered to them by suitably-qualified health care
professionals.

In addition to physicians, there are many other end-users of
health-related information.  Examples are patients and their families,
people at higher-risk of particular health problems, opinion-leaders in
the media and those involved in health-related advocacy roles, etc., etc.
How best to meet the diverse needs of these various end-users?

Medical journals will, I predict, continue to play an important role in
adding value to the primary research literature, e.g. by helping to
convert data and information into knowledge, in various forms appropriate
for the various end-users.

But, we've definitely entered a new era.  Which parts of the old
(pre-E-Health) models for health care will be retained, and which will be
abandoned?  My crystal ball is no clearer than that of anyone else.  But,
it does seem certain that the models will continue to change - including
the models for the translation of new data and information into
practically-useful knowledge.  The issue of freedom of access to the
primary research literature is, I believe, just one important aspect of
this much bigger (and rapidly-evolving) picture.

Jim Till
University of Toronto


Re: Elsevier's ChemWeb Preprint Archive

2001-08-15 Thread Jim Till
On Tue, 29 May 2001, I posted a message to this Forum which included this
criticism of the Chemistry Preprint Server (CPS):

[jt]Another flaw [is that the CPS isn't] on the list of Open Archives
[jt](at: http://oaisrv.nsdl.cornell.edu/Register/BrowseSites.pl).
[jt]So, the CPS archive doesn't meet an inter-operability criterion.

I've just received a copy of Volume 4, Issue 33 of the ChemWeb.com News
Bulletin (distributed to subscribers via email on August 15, 2001).  It's
a special issue in recognition of the establishment of the CPS, a year ago
(on August 21, 2000 - the CPS is at: http://preprint.chemweb.com).

An excerpt from this newsletter:

 The CPS was developed by closely following the Los Alamos archives
 (http://arxiv.org), which cover physics and related disciplines. In
 setting up the service ChemWeb.com has constantly referred to the Open
 Archive Initiative (http://www.openarchives.org) for e-print archives.

In this excerpt, it's emphasized that the design of the CPS server has
been guided by the Open Archives Initiative.  But, it seems to me that a
crucial element, inter-operability of OAI-compliant eprint archives,
probably still isn't being met by the CPS server.

I've just visited the current list of registered OAI-conforming
repositories (see:
http://oaisrv.nsdl.cornell.edu/Register/BrowseSites.pl).

The CPS isn't listed.  So, either it's registered but doesn't appear yet
on the list, it isn't registered, or it isn't fully OAI-conforming.  I
suspect that it isn't fully OAI-conforming.

Have I missed something?

Jim Till
University of Toronto


Reasons for freeing the primary research literature

2001-08-11 Thread Jim Till
There's been much discussion, via this forum, about HOW the primary
research literature might be freed.  (By primary research literature, I
mean original contributions by active and appropriately-qualified
researchers, where new knowledge, such as novel concepts, novel data, or
novel interpretations of existing data, are published).

But, what about reasons WHY the primary research literature should be
freed?  Here's my first attempt at a summary of some of the main reasons:

1.  It should be done:

 - Information gap: Libraries and researchers in poor countries can't
afford most of the journals that they need.

 - Library crisis: Libraries and researchers in rich countries can't
afford some of the journals that they need.

 - Public property: The results of publicly-funded research should be
publicly-available.

 - Academic freedom: Censorship based on cost rather than quality
can't be justified.

2.  It can be done:

- Open archives: Authors can self-archive their publications in open
archives.

- Cost issues: Both electronic journals and open archives can be
funded in a variety of ways.

- Branding issues: Essential quality control and certification need
not be sacrificed.

- IP issues: Desirable protection of intellectual property need not
be sacrificed.

What other important reasons have I neglected?

Jim Till
University of Toronto


Re: Kepler: Author-Based Archivelets

2001-06-29 Thread Jim Till
Earlier in this month, I downloaded Kepler via the Kepler home page
(http://kepler.cs.odu.edu), and then set up a personal OAI-compliant
archivelet (till@home) on my PC at home.  (Some email correspondence
with X. Liu and M. Zubair was required before I was able to set up and
register the archivelet successfully - my thanks to them for their help!).

So far, only one document of my own has been posted on my archivelet, and
it's been successfully harvested and cached by the Kepler harvester.  It
can be accessed via the Kepler Search Service, at:
http://kepler.cs.odu.edu:8080/searcharc/search.html

A second document, initially cached by the harvester, is a test document
for the Kepler project; it isn't one that I created.

My own document is a version, in HTML, of my article, Predecessors of
preprint servers, published in Learned Publishing 2001; 14(1): 7-13.
(I've retained copyright).

The original version of the same article, in PDF format, is also freely
available, via: http://www.catchword.com/09531513/v14n1/contp1.htm

The version, in HTML, cached by the Kepler harvester is also archived at:
http://xxx.lanl.gov/html/physics/0102004

I'd initially planned to set up the personal archivelet on the PC at my
office, which has a fast connection to the Internet.  However, it's behind
a firewall, and Kepler (at present) doesn't work behind such a firewall.
So, I used the fast connection at my office to download the zipped Windows
version of Kepler onto a rewritable CD-RW, and unzipped it onto the same
CD.

I then took that CD to my home and ran Kepler directly from the CD, on the
PC at my home.  I was able to register it successfully (and, was able to
avoid the long download time that would have been required at home, where,
at present, I have only a slow dial-up connection to the Internet).  I was
very pleased to learn that, within a day after successful registration,
the contents of my archivelet had been harvested and cached.

Because I have only a dial-up connection to the Internet at home, my
archivelet will be off-line most of the time.  This means that the cached
version is the only one that can be accessed.  I don't plan to modify the
content of the archivelet very often, so the cached version of the
archivelet should usually be quite up-to-date.

Thus, the results of my little experiment with setting up a personal
archivelet have been, so far, very positive.  My only real handicap has
been a lack of much familiarity with proper use of the Dublin core of
metadata, and especially, uncertainty about appropriate use of such
metadata for an OAI-compliant archivelet.
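
For illustration, a minimal Dublin Core description of the archived article
mentioned above might look something like the following.  This is only a
sketch: the element set is Dublin Core, but the exact container element and
namespace declarations depend on the version of the OAI protocol in use, so
none of the surrounding markup should be taken as authoritative.

```xml
<dc>
  <title>Predecessors of preprint servers</title>
  <creator>Till, Jim</creator>
  <date>2001</date>
  <type>text</type>
  <identifier>http://xxx.lanl.gov/html/physics/0102004</identifier>
</dc>
```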

One point that may merit some discussion: I believe that the harvester
will check all of the registered archivelets periodically, and, if
anything is changed, the old version in the cache will be replaced by the
new version.  But, the archivelet can't notify the harvester that
something has changed.

My understanding, on the basis of a brief exchange of personal emails with
Xiaoming Liu, is that, for the limited number of archivelets that exist
now, frequent harvesting poses no big problem.  However, this will become
a problem for a large number of archivelets.  This problem could be solved
if the personal archivelet (data provider) could push any changes to the
harvester (service provider).  But, because the OAI protocol is pull-based,
a push approach would involve a different paradigm.
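
The pull model described above can be made concrete with a sketch of the
kind of request a harvester issues.  The base URL below is hypothetical;
the verb and parameters are from the OAI protocol, whose ListRecords verb
lets a harvester ask a data provider only for records changed since a given
datestamp (i.e. since its last visit).

```python
from urllib.parse import urlencode

# The harvester periodically polls each registered archivelet (data
# provider).  Incremental harvesting is done by passing a 'from'
# datestamp, so only records changed since the last harvest are returned.
base_url = "http://example.org/archivelet/oai"  # hypothetical endpoint
params = {
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",  # ask for Dublin Core metadata
    "from": "2001-06-01",        # records changed since the last harvest
}
request_url = base_url + "?" + urlencode(params)
print(request_url)
```

The archivelet never initiates contact; it simply answers such requests,
which is why frequent polling becomes costly as the number of registered
archivelets grows.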

I believe that Xiaoming Liu is now a subscriber to this Forum.  If so, I
hope that he'll correct any misstatements that I may have made about
Kepler, or about the model upon which it, and its use, is based.

I'm sorry about the length of this message, but it does seem to me to
provide a good case study of one preliminary experience with a very
interesting way (although one that's still in an early experimental phase)
to set up a personal OAI-compliant archivelet.

Jim Till
University of Toronto


Re: Elsevier's ChemWeb Preprint Archive

2001-05-29 Thread Jim Till
As noted in a few previous messages of mine, I'm interested in some of the
features of the Chemistry Preprint Server (CPS, see:
http://preprint.chemweb.com/). (But, I have no connection with CPS, and
I'm not a chemist).

Yesterday (on May 28) I browsed through the CPS archive.  Of the
earliest-posted preprints (a total of 32, posted in July or August 2000,
i.e. 9-10 months ago), I could identify 14/32 = 44% that have subsequently
been published (or, according to the authors, been accepted for
publication, or been published in part) in a brand-name journal.  Of
these, 10/32 = 31% could already be found in the ISI Citation Databases.

Because of the relatively short interval (9-10 months) between the time
when these preprints were posted, and the time when I sought evidence of
subsequent publication, these percentages are very likely to be
underestimates.

I also checked, via the ISI Citation Databases, for one or more
publications by any of the authors or co-authors of the 32 preprints.  I
could identify publications for 28/32 = 87.5%.  So (like BMJ's ClinMed
NetPrints archive) the CPS preprint archive appears to have been used (at
least, initially) mainly by authors who have some previous track record of
publication in journals that are included in the ISI Citation Databases.

The CPS archive includes a feature that permits visitors to rate the
individual preprints on a 1-5 scale.  Of the 32 preprints posted 9-10
months ago, 10 have been rated highly (a 4-star rating; no 5-star
ratings were noted).  Of these 10, 6 have already been published, or
accepted for publication.  This publication rate (60%, so far) is higher
than the rate (8/22 = 36%) for the 22 longest-posted preprints that have
been rated less highly.

But, because the sample size is small, this difference in publication
rates isn't statistically significant at the P = 0.05 level (Fisher's
Exact Test).  At a later time, it will be of some interest to test again
the hypothesis that the rating scale may serve a somewhat useful
impact-rating function (in that the ratings may help readers to find
articles that may be more likely to be published in brand-name
journals).

The CPS archive also provides data about the number of views of each
individual preprint.  Of the 32 longest-posted preprints, 17 received more
than 300 views. Of these 17, 9 can be identified as published, or accepted
for publication - a publication rate of 53%.

Although the publication rate for those that received fewer than 300 views
is 5/15 = 33%, this difference in publication rates is, again, not
statistically significant (because of the small sample size).  So, again,
one may only conclude that the number of views might also serve a useful
impact-rating function - one that merits further attention at a later
date, when a larger sample size of early-posted preprints is available
(and especially, preprints that were posted at least a year previously).
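
The two significance checks above are easy to reproduce.  The sketch below
implements a two-sided Fisher's Exact Test directly (standard library only;
the function name is mine, not from any cited source) and applies it to the
2x2 tables implied by the counts quoted above.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's Exact Test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table."""
    row1 = a + b
    col1 = a + c
    n = a + b + c + d

    def prob(x):  # probability of x "successes" in row 1
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - (c + d)), min(col1, row1)
    # the small tolerance guards against floating-point ties
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Ratings: 6/10 highly-rated preprints published vs 8/22 rated less highly.
p_rating = fisher_exact_two_sided(6, 4, 8, 14)
# Views: 9/17 preprints with >300 views published vs 5/15 with fewer views.
p_views = fisher_exact_two_sided(9, 8, 5, 10)
print(round(p_rating, 3), round(p_views, 3))  # both well above 0.05
```

With these small samples, both p-values come out far above the 0.05 level,
consistent with the conclusion that neither difference is statistically
significant.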

Of course, it's possible that the 32 longest-posted preprints represent a
somewhat biased sample of the entire number of preprints that had been
posted to CPS before May 28, 2001 (a total of 226 preprints).

Please note that I'm not suggesting that the CPS preprint archive has no
flaws.  One flaw is (IMHO) that it's a preprint archive (not a true
eprint archive), in that it appears to be intended for preprints only, not
for both preprints and postprints (where, for the postprints, the authors
have retained copyright).  Although authors can, in the response section
of the CPS webpage that provides access to their preprints, post
information about the citation for the published version (a sign-post
function), this sign-post capability isn't integrated into the archive
as well as it is (for example) at the arXiv archive.

Another flaw (again, IMHO) is that it's not on the list of Open Archives
(at: http://oaisrv.nsdl.cornell.edu/Register/BrowseSites.pl).  So, the CPS
archive doesn't meet an inter-operability criterion.

In summary, the CPS archive seems to provide an interesting approach to an
impact-rating function.  It does provide authors with at least some
possibility of a sign-post function.  But, it isn't inter-operable.
(So, it can't easily become part of an (envisioned) universal eprint
archive?).

Jim Till
University of Toronto


Re: ClinMed NetPrints

2001-05-25 Thread Jim Till
On Thu, 24 May 2001, Tim Brody wrote (about my proposed 2nd criterion for
evaluation of an eprint archive, which was: 2) its suitability for
yielding citation data [an 'impact-ranking' criterion?]):

[tb] One might also add the facility to export hit data, as an
[tb] alternative criterion (or any other raw statistical data?).

What kind of raw statistical data might be most useful, in the future, for
'impact-ranking'?

At the arXiv archive, one section of the FAQ section (under Miscellaneous)
addresses the question: "Why don't you release statistics about paper
retrieval?"  (See: http://xxx.lanl.gov/help/faq/statfaq).

The short answer provided is: Such 'statistics' are difficult to assess
for a variety of reasons.  The longer answer also includes the comments
that:

It could be argued perhaps correctly that statistics may provide some
useful information at least on the relative popularity of submissions,
since the distributed access and other factors may be subsumable into some
overall scale factor. But even this information is ambiguous in many
cases, and publicizing, even when accurate, could merely accentuate
faddishness in fields already excessively faddish.

And,

Most significantly, however, there is a strong philosophic reason for not
publicizing (or even saving) these statistics. When one browses in a
library it is very important (in fact legislated) that big brother is not
watching through a camera mounted on the wall; for the benefit of readers
it is very important to maintain in every way possible this sense of
freedom from monitoring in the electronic realm.

Thought-provoking comments?

Jim Till
University of Toronto


Re: ClinMed NetPrints

2001-05-24 Thread Jim Till
On Wed, 23 May 2001, Stevan Harnad wrote:

[sh] What two criteria? Certainly the archives should be interoperable
[sh] (that's what www.openarchives.org is about, and what www.eprints.org
[sh] software is for), and certainly the citation-linking and
[sh] impact-ranking should be across all the distributed corpus, just as
[sh] the harvesting is. But apart from that, the only other criteria
[sh] (apart from topic) are unrefereed/refereed and, for the latter,
[sh] the journal brand-name (just as before).

My two suggested criteria for evaluating an eprint archive (or, if you
prefer, please regard them instead as 'design and usability guidelines' for
an eprint archive) are:

1) its suitability as part of an (envisioned) universal archive
[an 'inter-operability' criterion?], and,

2) its suitability for yielding citation data
[an 'impact-ranking' criterion?].

I understand that Stevan is suggesting a third:

3) its suitability for distinguishing between reports that either have, or
have not, been peer-reviewed and/or published in a 'brand-name' journal
(either before, or after, being included in the eprint archive)
[a 'sign-posting' criterion?].

Now, I'll ask a less theoretical question:  To what extent do existing
eprint archives conform to guidelines such as these?  In the set of
existing eprint archives, I'll include not only the arXiv archive and the
CogPrints archive, but also (for example) BMJ's ClinMed NetPrints archive
and Elsevier's Chemistry Preprint Server (CPS) archive.

Other comments on these 'criteria' (or, 'guidelines') would be welcomed
(if the 'Re: e-Archiving Challenge' thread hasn't diverted attention away
from this one!).

Jim Till
University of Toronto


Re: ClinMed NetPrints

2001-05-23 Thread Jim Till
On Tue, 22 May 2001, Stevan Harnad wrote:

[sh] An eprint archive is analogous to a library, not to a journal.
[sh] (Indeed, journal articles are archived in eprint archives.)

I didn't intend to imply (if I seemed to) that an eprint archive is
analogous to a journal.  There are a number of models for the role of eprint
archives.  One is to regard such archives as analogous to libraries;
another is to regard them as analogous to databases.

For example, in a model proposed by Paul Ginsparg, The three layers are
the data, information, and knowledge networks--where information is taken
to mean data plus metadata (i.e. descriptive data), and knowledge
signifies information plus synthesis (i.e. additional synthesizing
information), see: http://www.biomedcentral.com/info/ginsparg-ed.asp

In this model, the arXiv eprint archive is located at the data level.

Rob Kling and Geoffrey McKim have suggested that:  Different scientific
fields have developed and use distinctly different communicative forums,
both in the paper and electronic arenas, and these forums play different
communicative roles within the field, see:
http://arxiv.org/abs/cs/9909008

That different models may be preferred by those in different fields
probably stems in large part from differences in historical experience
(see, for example, my article on Predecessors of preprint servers in
Learned Publishing 2001; 14(1): 7-13; a version in HTML is available via:
http://arXiv.org/html/physics/0102004).

About the biomedical field: the editor of Perspectives in Electronic
Publishing (Steve Hitchcock), has commented that: Biomedical researchers
have been among the most eager to exploit the features of electronic
publishing allied to freely available data services, yet at the same time
acting to protect the formal structure and discipline imposed by
journals, (see:
http://aims.ecs.soton.ac.uk/pep.nsf/0dbef9e185359a288025673f006fadfd/fa5e35e7fed5053480256716003abf31?OpenDocument)

This comment is in agreement with my own experience in this field.

[sh] In the new era of distributed, interoperable eprint archives, it
[sh] shows only what happens to appear in one arbitrary fragment of the
[sh] global virtual library into which the eprint archives are all
[sh] harvested.

Agreed.  But, the individual eprint archives must be designed to permit
harvesting of their contents in this way.  In my previous message, I
referred to Greg Kuperberg's suggestion that a main criterion in
evaluating an eprint archive should be its suitability as part of the
envisioned universal archive.  Whether or not one prefers to regard this
universal archive as a global virtual library, this criterion still
seems to me to be an appropriate one.

[jt] Another criterion (it seems to me) should be its suitability for
[jt] obtaining citation data.  An example, based on the arXiv archive, is
[jt] provided by the Cite-Base search service
[jt] (http://cite-base.ecs.soton.ac.uk/cgi-bin/search)

[sh] Correct. But cite-base is not measuring archive-impact but paper-
[sh] or author-impact. And it is measured across multiple distributed
[sh] archives.

Agreed.  But, again, the eprint archive must be designed to permit such
measurements across multiple distributed archives.  This second criterion
also seems to me still to be an appropriate one.

[sh] What's needed now is more archives, and the filling of them. The
[sh] quality measures will take care of themselves. The more papers are
[sh] up there, digitally archived, the more new measures of productivity
[sh] and impact they will inspire.

Agreed.  But, will these additional eprint archives always be designed
such that the above two criteria are met?

Are there additional criteria that should also be met - especially ones
that will help to ensure that the quality measures will take care of
themselves?

Jim Till
University of Toronto


Re: ClinMed NetPrints

2001-05-22 Thread Jim Till
Last year (on Dec 11, 2000) I posted a message (on the subject: Re ClinMed
NetPrints) about the publication rate for eprints posted at the ClinMed
NetPrints website (http://clinmed.netprints.org/home.dtl)

As of Dec. 10, 2000, I estimated that about 25% of the eprints posted at
this website had completed the entire publication process, and had
subsequently appeared in a peer-reviewed journal.

As of May 20, 2001, 45 eprints have been posted at the ClinMed NetPrints
website.  Of these, 19 were posted before the end of May, 2000 (i.e. about
a year or more ago).  Of the 19, I could identify 5 that had subsequently
been published in peer-reviewed journals (a publication rate of 26%).

I also checked, via PubMed and the ISI Citation Databases, for other
publications by the author(s) of the 45 eprints.  For only 8 of the 45 was
I unable to find any previous publications by the author (nor by any of
the co-authors, for multi-authored research reports).  So, for 37/45 of
the eprints (82%), one or more of the authors had some track record of
prior publications in journals listed in PubMed, or in the ISI Citation
Databases (in an attempt to be thorough, I included all three of these
databases, not just the ISI Science Citation Index).

Although track record is certainly not a highly reliable indicator of the
quality of a research report, neither is success in peer review
(especially in low-impact journals!).  However, I'm especially interested
in ways to assess the quality of an eprint archive as a whole, not just
the quality of individual eprints.  As in my previous message, I continue
to ask: what criteria should be used to assess the quality of an eprint
archive?

I've seen sets of criteria for use in the evaluation of eHealth websites
(see, for example, an Information Quality Tool based on the Health Summit
Working Group's Criteria for Assessing the Quality of Health Information on
the Internet: http://hitiweb.mitretek.org/iq/default.asp), but I haven't
seen an analogous set of criteria for use in the evaluation
of eprint archives (and, especially, ones related to health research, such
as the ClinMed NetPrints archive, and The Lancet's eResearch Archive,
which, at present, contains only a few research reports; see:
http://www.thelancet.com/era/epstatus).

In a contribution to another thread (Re: Evaluation of preprint/postprint
servers, in a message posted on 15 Dec 2000), Greg Kuperberg suggested
that a main criterion in evaluating an archive should be its suitability
as part of the envisioned universal archive.

Another criterion (it seems to me) should be its suitability for obtaining
citation data.  An example, based on the arXiv archive, is provided by the
Cite-Base search service (http://cite-base.ecs.soton.ac.uk/cgi-bin/search)
which (I gather) is based on OpCit citation data, and allows users to rank
searches for reports in arXiv by citation impact or by hits (see The Open
Citation Project, http://opcit.eprints.org/).

Does anyone know of a web-accessible set of criteria for use in evaluating
eprint archives?

Jim Till
University of Toronto


Elsevier's ChemWeb Preprint Archive

2001-04-16 Thread Jim Till
I have no connection with the ChemWeb preprint server, but I continue to
watch its evolution with some interest.  One interesting (to me) feature
is that it attempts to provide some simple statistical indicators for each
article, such as: (see http://preprint.chemweb.com/):

number of views
number of responses
rank (ranked by self-selected visitors to the preprint, on a 1-5 scale)

It remains to be seen, at some time in the future, which of these
indicators (if any, or perhaps combinations of them) might best predict
the subsequent impact of these preprints (or of published papers based on
them, as assessed, for example, by citation data).

Via the Browse button, one can also access data about the numbers of
preprints that have been posted to date (last updated April 12, 2001):

Classification                       Total    %

Analytical Chemistry                    20   10
Biochemistry                            11    5
Chemical Engineering                    16    8
Environmental Chemistry                 10    5
Inorganic Chemistry                     23   11
Macromolecular Chemistry                 6    3
Medicinal/Pharmaceutical Chemistry       6    3
Miscellaneous                            9    4
Organic Chemistry                       22   11
Physical Chemistry                      83   40

Total                                  206  100

It may be noteworthy that the largest number of preprints has been in the
subfield of physical chemistry.  Might this be another example (along with
the arXiv server) of physics-oriented scientists choosing to be early
adopters of preprint servers?  Or, is physical chemistry simply a very
large subfield, in comparison with other kinds of chemistry?  (I'm not a
chemist, so the answer isn't obvious to me!).

Jim Till
University of Toronto


The Lancet's eResearch Archive

2001-03-14 Thread Jim Till
On 14 Dec, 2000, I wrote [in part, on the Subject: Re: Evaluation of
preprint/postprint servers]:

[jt] Earlier this year, an eprint archive on international health was
[jt] available via Lancet's website, at:
[jt] http://www.thelancet.com/newlancet/eprint/index_body.html
[jt] This url now yields 404 Not Found.

The Lancet's 'eResearch Archive' (ERA) is now available again, via:
http://www.thelancet.com/era

The ERA home page is entitled:

 THE DAWN OF A NEW ERA

 THE LANCET Electronic Research Archive in international health and
 e-print server.

Re eprints (again, from the ERA home page):

 And e-prints? - All papers published in the journal are rigorously
 peer-reviewed, both qualitatively and statistically. But some topics
 benefit from more wide-ranging comment before publication. To ensure
 these papers receive the extra review they deserve we will post them
 on the e-print server.

 Access to ERA will be unrestricted on thelancet.com - our objective is
 to create a searchable public library of research in international
 health.

 From the 'Guidelines' page [http://www.thelancet.com/era/guidelines]:

 At the time of submission to The Lancet, authors can request that
 their paper appear as an eprint. Papers that pass the initial in-house
 editorial screen are reproduced, as submitted, on an open-access
 website with a citable reference indicating that the submission is
 unreviewed. Users of the service can read and comment on all
 submissions under consideration. These comments will be reproduced
 with the paper. Papers that appear as eprints are also formally
 peer-reviewed. The free and formal comments are used to help The
 Lancet's editors decide how to proceed with a paper. There are two
 possible endpoints for an eprint: publication (after revision if
 necessary) in print and electronic formats (the citation becomes that
 of the printed version); or rejection, in which case the eprint is
 removed from the site (a record of its passage will remain) and
 authors will be free to submit elsewhere.

One reason why this particular archive may be of some interest to members
of this forum is that it apparently involves a sequential review process:
an in-house editorial screen - open-access eprint - peer-review -
acceptance or rejection.

At the page on 'Eprint Status' [http://www.thelancet.com/era/epstatus],
three eprints are listed as 'In print' and four are listed as 'Rejected'.
But, one of the four, 'How a consumer health library can help empower
patients with information', by Aniruddha Malpani, from the Health
Education Library for People (HELP) in Bombay, India, was 'Moved to ERA
Int Health'.  The ERA Int Health webpage includes, so far, a total of only
9 articles, 6 with dates in 1999, and 3 with dates in 2000.

For those not familiar with medical journals:  The Lancet is a
well-respected general medical journal (second only to the New England
Journal of Medicine in impact factor).  Of the top five general medical
journals (ranked by impact factor), two (the British Medical Journal and
the Canadian Medical Association Journal) provide free online access to
the full text of all articles.  It will be interesting to see whether or
not the impact factors for these latter two journals will increase,
relative to the impact factors for the other three general medical
journals.  (Another of the top five general medical journals is JAMA, the
Journal of the American Medical Association, which is fully available
online only to subscribers and to members of the AMA).

Jim Till
University of Toronto


Public Library of Science Initiative

2001-02-28 Thread Jim Till
Received today (from publiclibraryofscience.org).

Jim Till
University of Toronto

-- Forwarded message --
List-Post: goal@eprints.org
Date: Tue, 27 Feb 2001 22:43:16 -0800
From: Public Library of Science Initiative p...@publiclibraryofscience.org
Reply-To: feedb...@publiclibraryofscience.org
Subject: Please tell your colleagues about Public Library of Science

As of February 25, more than four thousand scientists from 91
countries have joined you in signing the open letter in support of
the Public Library of Science initiative. As a result of this
initiative, several scientific publishers have already decided to
adopt the policy advocated in the open letter, and almost every
publisher and scientific society is discussing it. Yet, most life
scientists are still unaware of this initiative, and many of those
who do know of its existence have a distorted view of the proposal
and its purpose.

The breadth and depth of support for this initiative from the
scientific community will determine its success. We believe that with
your help in informing your colleagues about this effort, and
encouraging them to support it, the open letter can be published in
May with the signatures of 50,000 scientists.

To achieve this goal, we each need to reach out to at least ten of
our colleagues. We would therefore like to ask you to consider two
steps:

1. Send an email message to all the scientific colleagues in your
address book (using the text attached at the bottom of this message,
or a modified version of it, or use your own language).

2. Spend an hour or two of your time in the next week talking to
colleagues at your own and other institutions, explaining to them the
reasons that you chose to support the initiative, and encouraging
them to join you in signing the letter. (Let them know that they can
sign the letter online at: http://www.publiclibraryofscience.org).

Please also make a special effort to talk directly with the editors
and publishers of journals that are important to you, informing them
of your support of this initiative, and encouraging them to adopt the
policy that the letter advocates. We would greatly appreciate hearing
about any such efforts you are able to make.

Your time and effort can make the crucial difference in the success
of this initiative.

Sincerely,

Michael Ashburner, University of Cambridge
Patrick O. Brown, Stanford University
Mary Case, Association of Research Libraries
Michael B. Eisen, Lawrence Berkeley National Lab and UC Berkeley
Lee Hartwell, Fred Hutchinson Cancer Research Center
Marc Kirschner, Harvard University
Chaitan Khosla, Stanford University
Roel Nusse, Stanford University
Richard J. Roberts, New England Biolabs
Matthew Scott, Stanford University
Harold Varmus, Memorial Sloan-Kettering Cancer Center
Barbara Wold, Caltech


= Model email message to send to colleagues =

Dear Colleague,

We write to ask for your support of an initiative to provide
unrestricted access to the published record of scientific research.
An open letter in support of this initiative has been signed by more
than 4,500 scientists from 91 countries. We hope you will take a
minute to read the letter and consider signing it.

The open letter, a list of the scientists who have already signed it,
and some answers to frequently asked questions are posted at:
http://www.publiclibraryofscience.org. This site also provides a way
for colleagues to sign the open letter online.

You may also wish to read an editorial written by Richard J. Roberts,
recently published in PNAS, which explains why he supports the
initiative (http://www.pnas.org/cgi/content/full/041601398v1).

This is a grassroots initiative, and the breadth and depth of support
it receives from the scientific community will determine its success.
If you decide to support this effort, please consider spending an
hour or two of your time in the next week talking to colleagues at
your own and other institutions, explaining to them the reasons that
you chose to support it, and encouraging them to join you in signing
the letter. Your effort can really make a difference.

===


Re: NIH's Public Archive for the Refereed Literature: PUBMED CENTRAL

2001-02-19 Thread Jim Till
One month ago (on Fri, 19 Jan 2001), I wrote:

[jt] A recent advocacy effort about PubMed Central (found via:
[jt] http://www.publiclibraryofscience.org/index.shtml):

[jt] publiclibraryofscience.org was established to organize support
[jt] within the scientific community for online public libraries of
[jt] science, providing unrestricted free access to the archival record
[jt] of scientific research.

[jt] Scientists can express their support for this effort by signing an
[jt] open letter. 1507 scientists from 52 countries have already signed.

[remainder snipped]

4006 people from 84 countries have now signed the open letter.

Jim Till
University of Toronto


Re: A Note of Caution About Reforming the System

2001-02-17 Thread Jim Till
On Sat, 17 Feb 2001, Stevan Harnad wrote:

[sh] Here is a prediction: If researchers really did stop submitting
[sh] their findings for peer review, the quality of the literature would
[sh] decline until peer review had to be re-invented. (For the record: I
[sh] mean quasi-classical, a-priori peer review, not post-hoc peer
[sh] commentary on an unfiltered, unanswerable raw literature of
[sh] indeterminate navigability).

What if there's no consensus about a definition of the research
'literature' in the future?  Various research 'literatures' might be
defined, based mainly on which particular search engine one is using (and,
of course, on the stability and accessibility of those archives that the
search engine does detect).

An example: when I tried out the Digital Integrity search engine demo
(http://www.findsame.com/), I used as my source of key words the entire
abstract of my article, 'Predecessors of preprint servers' [Learned
Publishing 2001(January); 14(1): 7-13]; HTML version available via
http://www.arxiv.org/html/physics/0102004 and PDF version accessible via
http://www.catchword.com/alpsp/09531513/v14n1/contp1-1.htm

An article that matched a few of the key words was one authored by Julie
M. Hurd of the University of Illinois at Chicago (I believe that she's the
head of the Science Library there), entitled 'Information Technology:
Catalyst for Change in Scientific Communication'.

This article was last edited on 5 February 1998 [and was from the 1996
IATUL conference, 24-28 June 1996, 'Networks, Networking and Evaluations
for Digital Libraries']:
http://educate.lib.chalmers.se/IATUL/proceedcontents/paperirvine/hurd.htm

When I did a search of JSTOR, I wasn't able to find anything by this
author.  However, knowing that the article did exist online, I was easily
able to find it again, using the Google search engine and a few
appropriate key words.

In the article, she compares some models of scientific communication,
including more traditional ones (based on the refereed article and the
peer-reviewed journal as the basic units of distribution), and less
traditional ones (including one where the e-print is the basic unit of
distribution, and one that uses data as the basic unit of distribution).

This is (IMHO) clearly a scholarly article.  I don't know whether or not
it's (subsequently?) been peer-reviewed, but I did find it interesting.

My point here isn't to be an advocate in favor (or against) any of the
models summarized by Julie M. Hurd.  My point is simply that the model
that provides the main focus for this Forum is one based on self-archived
e-prints and peer-reviewed journals as the basic units of distribution.
It represents one model for opening up access to (the traditional
peer-reviewed) research 'literature'.  But, of course, it's not the only
model for an 'open literature' of the future.

My own perspective?  There isn't (and won't be) any 'scholarly consensus'
about the exact boundaries of the research 'literature'.

Jim Till
University of Toronto


Re: For Whom the Gate Tolls?

2001-02-11 Thread Jim Till
In his article 'For Whom the Gate Tolls?' (at:
http://www.ecs.soton.ac.uk/~harnad/Tp/resolution.htm#1.3.) Stevan
Harnad wrote: '... it is the much larger and more representative
non-give-away literature that has always been the model for copyright law
and copyright concerns. But copyright protection from theft-of-authorship
(plagiarism), which is essential for both give-away and non-give-away
authors, has nothing at all to do with copyright protection from
theft-of-text (piracy), which non-give-away authors want but give-away
authors do not want'.

Re detecting 'theft-of-authorship': there's a novel (to me) kind of search
engine (demo at: http://www.findsame.com/).  It's designed to search for
content, not keywords.  One of the examples provided at the website is one
entitled: 'Discover plagiarism in a student report'.  I know nothing about
the owner of the website (Digital Integrity, Inc.).  I found out about
this site from a message posted at another discussion forum (on a topic
unrelated to the main focus of this forum), and found it interesting.
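Digital Integrity's actual matching algorithm isn't described here.  A common technique for this kind of content-level (rather than keyword) matching is w-shingling with Jaccard similarity: break each document into overlapping w-word phrases and measure set overlap.  The sketch below uses made-up sample strings for illustration.

```python
def shingles(text, w=4):
    """Break text into the set of overlapping w-word phrases ('shingles')."""
    words = text.lower().split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a, b):
    """Jaccard similarity of two texts' shingle sets: |A & B| / |A | B|."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

# Two hypothetical passages that share a long run of text:
doc1 = "the cost of the experiment was small relative to the dividends received"
doc2 = "the cost of the experiment was small relative to the total budget"
print(round(jaccard(doc1, doc2), 2))  # 0.64 -- a high score flags shared text
```

Unlike a keyword search, this scores documents by how much verbatim (or near-verbatim) text they share, which is why it suits plagiarism detection.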

On another topic (patents): I didn't see any mention of patents in
Stevan's article.  One concern that has been expressed to me recently
about open self-archiving of preprints (e.g. in biomedical fields relevant
to biotechnology), is that it should only be done *after* patent
protection has been sought.

My understanding is that the United States is the only country in the
world that offers patent protection under its patent law to those who are
the 'first to invent', and that all other countries are on a 'first to
file' basis.  I assume that, on a 'first to invent' basis, open
self-archiving of preprints could be an advantage in establishing
priority.  As Stevan wrote (at:
http://www.ecs.soton.ac.uk/~harnad/Tp/resolution.htm#12.Priority):
'Establishing priority is again a matter of probability, but it can
readily be made much more definitive and reliable on-line than on-paper if
we wish'.

When the basis for patent protection is 'first to file', it does seem to
me that, if one is reporting results that could yield an 'invention', then
one should file for patent protection *before* self-archiving a preprint.
However, it seems to me that (if one has help, e.g. via the appropriate
office at one's University or Research Institute), it need not take a long
time to file for patent protection (only a few days?).

Comments about the 'first to file' scenario would be welcomed.

Jim Till
University of Toronto


Re: Information Exchange Groups (IEGs)

2001-02-06 Thread Jim Till
On Wed, 31 Jan 2001, Albert Henderson chess...@compuserve.com wrote:

[ah] For anyone who missed my point (and I apologize for not making
[ah] it ultra-clear) what is controversial, and what I find insulting
[ah] to all science editors, is Till's interpretation that makes
[ah] reference to the Star Chamber -- found in the paragraph that
[ah] precedes his conclusion.

Again, I can only suggest that those interested should read the article,
and decide for themselves how controversial they find it to be.  For
example, if you had been asked to be a reviewer or an editor for this
article, would you have demanded that the offending paragraph be omitted?

A reminder: a definition of the verb 'censor' is: 'to make deletions or
changes in'.  I'd prefer not to launch a debate about the ethics of
'censorship' (for example, 'scientific censorship' by peers, e.g. on the
basis of perceived errors in scientific methodology: OK; 'political
censorship', e.g. on the basis of a failure to exhibit political
correctness: not OK?).

However, perhaps the issue of 'censorship' *is* relevant to the central
theme of this Forum: freeing the (acceptable-quality) research literature?

BTW, an e-print (in this case, an 'e-postprint') of my article,
'Predecessors of preprint servers' [Learned Publishing 2001(January);
14(1): 7-13], is now freely available in HTML, via:
http://www.arxiv.org/html/physics/0102004

A free PDF version also continues to be available, via:
http://www.catchword.com/09531513/v14n1/contp1.htm

As I mentioned in an earlier message, I've retained copyright.

Jim Till
University of Toronto


Re: Information Exchange Groups (IEGs)

2001-01-30 Thread Jim Till
On Tue, 30 Jan 2001, Stevan Harnad wrote (on the subject:
Re: Conflating Gate-Keeping with Toll-Gating):

 On Mon, 29 Jan 2001, Albert Henderson wrote:

  James E. Till sees science editors as the main barrier to the
  circulation of free preprints. He should understand that there is
  a good reason for editors' successful opposition, one that is not
  as well recognized by the author as by the scientific world. Put
  as succinctly as possible, the reason is that editors are responsible
  for the integrity of the scientific record...

 Without endorsing James E. Till's position (I don't think Science Editors
 are a barrier, nor that the real issue is primarily preprints), one can
 immediately correct the familiar error Albert Henderson is making here:

[remainder snipped]

Whoa!  In my article ('Predecessors of preprint servers'), the concluding
paragraph of the section about IEGs (which were terminated in 1967) is:

One of the conclusions reached by Green was that it was not the failure
of the IEG experiment, but its success, that finally spelled its doom. He
commented on the costs of the experiment, and argued that the cost of his
IEG (No. 1) was minuscule, relative to the dividends received by the
members and to the total costs of their research.[ref 20] He suggested
that opposition from scientific journals had a crucial influence on the
decision to terminate the IEGs. He noted that the editors of Nature [ref
27] 'spilled the beans prematurely' about a meeting in Vienna, on 10 and
11 September 1966, of the editors of several major biochemically oriented
journals. Five of the editors voted to propose to their editorial boards
not to accept articles or other communications previously circulated
through IEGs.[refs 28,29] Also, papers could not be submitted
simultaneously to a journal and an IEG, nor could papers already accepted
for publication in a journal be released through an IEG.[ref 28] This
policy, which, in effect, banned the inclusion of preprints into the
scholarly literature, was soon adopted by several major biomedical
journals.[ref 28,29] It was probably one of three major barriers to the
further development of a 'preprint culture' in these sciences. The second
was the termination, by NIH, of support for the IEGs, in part because of
the costs that would be involved in any continuation or expansion of the
IEGs. The third was the continuing opposition, by many respected and
senior biomedical scientists, to the distribution of unrefereed papers.

So, the 'position' referred to by Albert Henderson is that of David E.
Green, who chaired IEG No. 1 (which was focused on the related fields of
electron transfer, oxidative and photosynthetic phosphorylation, ion
transport, and membrane structure and function; Green was a respected
senior contributor to these fields).  My own 'position' is that there were
three major barriers to the further development (immediately after 1967)
of a 'preprint culture' in the biomedical sciences.  These three barriers
are summarized in the last sentence of the paragraph quoted above.

The 'Concluding comment' in my article is:

Will a new revolution in scientific publishing, in which journals come to
be regarded as an overlay on preprint databases, now overtake the
biomedical sciences, following the lead of HEP?[refs 14,37] The most
prudent prediction probably is: much more quickly in some areas of
research than in others. The issues involved continue to be actively
debated, on (for example) an online forum sponsored by Sigma Xi.[ref 41]

So, this final paragraph summarizes my overall 'position' (it doesn't seem
to me to be a very controversial one!).  I can only suggest that those who
are interested in an historical perspective on the IEGs, and on the
origins of a 'preprint culture' in high-energy physics (HEP), should read
the article themselves, and not rely only on a highly-condensed
interpretation provided by someone else.

Another link to a PDF version of my article is:

http://www.catchword.com/09531513/v14n1/contp1.htm

BTW, the article is freely available, and I've retained copyright.

Jim Till
University of Toronto


Re: ALPSP Research study on academic journal authors

2001-01-30 Thread Jim Till
On Tue, 30 Jan 2001, Sally Morris wrote:

 I have been asked whether the acceptance/rejection figures  varied
 significantly by subject area, so I have delved deeper into the figures to
 analyse this.  The provisional results are interesting (bear in mind,
 though, that the samples for some subjects are very small)

 By and large, the arts and humanities journals (if I may call them that)
 appear to be far fussier than those in the sciences, with a marked skew
 towards a low percentage of acceptances.  I attach a table for those who can
 read it.

Sally, thanks for this first look at the data.  They are interesting
(interesting enough for me to spend a lunch hour looking at them!).

I've tried to convert the 'percentage responses' into actual numbers
(rather than percentages).  I've assumed that the first set of data for
'Life Science' are the correct ones (not the second).  The data for
'Medical and Veterinary Science' don't add up to the expected total
(should be 53 responses, not 44?), which accounts for the question marks
in the table below.  (I hope that the table can be read!).

For categories involving more than 10 journals:

Percent
acceptance     EM   LS   MC   MVS   SSE   Total

under 10        0    0    0    1?     1      2?
10-25           0    1    0    2?    10     13?
25-50           0   14    3   14?     7     38?
50-75           6    9    1   21?     3     40?
over 75         0    5    0    6?     0     11?

Total           6   29    4   44?    21    104?

Because many of the cells in this table are small, an exact statistical
analysis isn't easy for an amateur statistician to perform!  However, I
combined some of the acceptance rate categories, in order to obtain
multiple 2x2 tables (0-50% acceptance vs 50-100% acceptance), and applied
Fisher's Exact Test. (I still have tables on my bookshelf for this test,
compiled by DJ Finney and colleagues: 'Tables for testing significance in
a 2x2 contingency table', Cambridge University Press, 1963!).

Because of the very small total (4 responses) for the MC category, and
the uncertainty about the correct numbers for the MVS category, I did
only two comparisons:

EM (Engineering and Materials Science) vs LS (Life Science): EM appeared
to have significantly *more* responses (than expected from the marginal
totals) in the 50-100% acceptance category (P=0.024).

SSE (Social Science and Education) vs LS (Life Science): SSE appeared to
have significantly *fewer* responses (than expected) in the 50-100%
acceptance category (0.05 > P > 0.01).

Even when one takes into account that I did more than one comparison, it
appears that, on the basis of these figures, SSE journals do have a lower
acceptance rate, and EM journals a higher acceptance rate, than LS
journals. So, these data do seem to be consistent with the results of the
Zuckerman and Merton study (referred to in previous messages).

Jim Till
University of Toronto

The remainder of Sally's original message was:

 If we look only at those samples covering more than ten journals:

 Engineering  Materials Science (12 journals, 6 responses)
 All of the responses showed between 50 and 75 percent acceptance

 Life Science (39 journals, 29 responses)
 Under 10 percent - none
 10-25 - 3 percent
 25-50 - 46 percent of journals, 48 percent of respondents
 50-75 - 33 percent of journals, 31 percent of respondents
 over 75 - 18 percent of journals, 17 percent of respondents

 Mathematics and Computing (11 journals, 4 responses)
 Under 10 percent - none
 10-25 - none
 25-50 - 64 percent of journals, 75 percent of respondents
 50-75 - 36 percent of journals, 25 percent of respondents
 over 75 - none

 Life Science (39 journals, 29 responses)
 10-25 - 3 percent
 25-50 - 46 percent of journals, 48 percent of respondents
 50-75 - 33 percent of journals, 31 percent of respondents
 over 75 - none

 Medical and Veterinary Science (66 journals, 53 responses)
 Under 10 percent - 2 percent
 10-25 - 3 percent
 25-50 - 24 percent of journals, 26 percent of respondents
 50-75 - 38 percent of journals, 40 percent of respondents
 over 75 - 15 percent of journals, 11 percent of respondents

 Social Science and Education (26 journals, 21 responses)
 Under 10 percent - 4 percent of journals, 5 percent of respondents
 10-25 - 42 percent of journals, 48 percent of respondents
 25-50 - 35 percent of journals, 33 percent of respondents
 50-75 - 19 percent of journals, 14 percent of respondents
 over 75 - none

 So insofar as these figures are representative (they cover just over 200
 journals), there does seem to be some bias towards lower average acceptance
 rates (i.e. higher rejection rates) in the arts and humanities than in the
 sciences.   What that tells us I am not sure!

 Sally


Re: ALPSP Research study on academic journal authors

2001-01-30 Thread Jim Till
On Tue, 30 Jan 2001, Jim Till wrote:

 On Tue, 30 Jan 2001, Sally Morris wrote:

  I have been asked whether the acceptance/rejection figures  varied
  significantly by subject area, so I have delved deeper into the figures to
  analyse this.  The provisional results are interesting (bear in mind,
  though, that the samples for some subjects are very small)
 
  By and large, the arts and humanities journals (if I may call them that)
  appear to be far fussier than those in the sciences, with a marked skew
  towards a low percentage of acceptances.  I attach a table for those who can
  read it.

Oops!  Initially, I didn't notice that Sally had attached an XLS file
containing a more detailed set of data!

A revised version of the table that I posted earlier today is:

For categories involving more than 10 journals:

Percent
acceptance     EM   LS   MC   MVS   SSE   Total

under 10        0    0    0     1     1      2
10-25           0    1    0    14    10     25
25-50           0   14    3    21     7     45
50-75           6    9    1    11     3     30
over 75         0    5    0     6     0     11

Total           6   29    4    53    21    113


Again, I combined some of the acceptance rate categories, in order to
obtain multiple 2x2 tables (0-50% acceptance vs 50-100% acceptance), and
applied Fisher's Exact Test.  I did two more comparisons:

MVS (Medical and Veterinary Science) vs LS (Life Science): No
statistically-significant difference between acceptance rates (P=0.16).

MVS (Medical and Veterinary Science) vs SSE (Social Science and
Education): No statistically-significant difference between acceptance
rates (P=0.15).

So, the conclusions in my earlier message (and in Sally's) aren't
affected:

 Even when one takes into account that I did more than one comparison, it
 appears that, on the basis of these figures, SSE journals do have a lower
 acceptance rate, and EM journals a higher acceptance rate, than LS
 journals. So, these data do seem to be consistent with the results of the
 Zuckerman and Merton study (referred to in previous messages).

Jim Till
University of Toronto


Re: ePrint Repositories [+ Peer Review]

2001-01-27 Thread Jim Till
On Fri, 26 Jan 2001, Sally Morris sec-...@alpsp.org wrote:

[sm] We recently carried out an online survey of current peer review
[sm] practice. We got 200 replies, representing many more than 200
[sm] journals (some respondents were multi-journal editors, or
[sm] publishers).  You can find the results at www.alpsp.org/pub4.htm

Interesting report (at: http://www.alpsp.org/pub4.htm)! One of the
questions in the report was:

Q7 What percentage of papers are eventually accepted for publication?

The summary of the responses to this question was:

Acceptance rates show a wide variation and the broad bands used in this
questionnaire do not provide a particularly clear picture. However, the
majority of journals represented in the survey lie in the 25-50% band. A
considerable number of respondents accept more than 50% and less than 20%
(ie 80% rejection) of articles. A few journals have acceptance rates
higher than 75% and lower than 10% (90% rejection).

A question for Sally:

Have the results for this question been cross-tabulated, to see whether or
not rejection rates are consistently higher for journals in some fields,
in comparison with journals in other fields?

I'm particularly interested to know whether or not these results are
consistent with those reported by Zuckerman and Merton in 1971.  They
reported substantial variation, with rejection rates of 20 to 40 percent
in the physical sciences, and 70 to 90 percent in the social sciences and
humanities.

[See: Zuckerman HA, Merton RK. Patterns of evaluation in science:
Institutionalization, structure and functions of the referee system.
Minerva 1971; 9: 66-100].

Jim Till
University of Toronto


Re: NIH's Public Archive for the Refereed Literature: PUBMED CENTRAL

2001-01-19 Thread Jim Till
A recent advocacy effort about PubMed Central (found via:
http://www.publiclibraryofscience.org/index.shtml):

publiclibraryofscience.org was established to organize support
within the scientific community for online public libraries of
science, providing unrestricted free access to the archival record of
scientific research.

Scientists can express their support for this effort by signing an
open letter. 1507 scientists from 52 countries have already signed.
Your support will help us to persuade the publishers of scientific
journals to commit to giving their archival material to the public
domain for distribution through online public libraries.

More information about this effort, the current editorial policies of
journals, and related issues is available in our FAQ. We also
encourage you to read an editorial written by Richard Roberts and
published in PNAS describing why he thinks scientists should support
this effort.

URLs for:

Open letter: http://www.publiclibraryofscience.org/plosLetter.htm

FAQ: http://www.publiclibraryofscience.org/plosFAQ.htm

Editorial: http://www.publiclibraryofscience.org/plosRoberts.htm

The editorial is also available via the Jan. 16 issue of PNAS:

Richard J. Roberts
PubMed Central: The GenBank of the published literature
PNAS 2001 98: 381-382.

Jim Till
University of Toronto


Evaluation of preprint/postprint servers

2000-12-13 Thread Jim Till
On Mon, 11 Dec 2000, Greg Kuperberg wrote (in part; the subject line was
Re: The preprint is the postprint):

GK However, it is not quite true that the arXiv is completely unfiltered.
GK Rather, the system has the absolute minimum of filtering needed for
GK self-sustaining quality.

[snip]

GK So what kind of filtering is there?  Each category has moderators, and
GK I am one of about 30 in the mathematics section.  We have a few hours
GK to review submissions.  If we do nothing they are automatically
GK posted. We are not allowed to censor any remotely relevant submission,
GK no matter how wrong or trivial it appears to be, nor do we want to.
GK
GK However, we can reclassify a submission if it is off-topic.  We can
GK reject a submission if we can't recognize it as research at all, for
GK example if it is pornography or a non-mathematical autobiography.  We
GK can reject a submission if it has the wrong form, for example if it is
GK only an abstract or an unannotated bibliography or unannotated data.
GK And we can intervene against spam, e.g. an author who divides one
GK self-contained manuscript into 100 submissions.
GK
GK Most of the moderating decisions reclassify legitimate submissions
GK with strange classifications or excessive cross-listings.

[snip]

GK The system is similar to the conventions for informal seminars in the
GK non-electronic world.  Anyone can attend, except for people who are so
GK clueless that they would disrupt the talks.


I found this information from Greg Kuperberg to be *very* interesting.

So, should one criterion for the evaluation of the quality of preprint/
postprint servers be the existence of (as a minimum) a filtering system
analogous to the one described by Greg?

I'm assuming that, at present, the arXiv network of servers provides one
appropriate gold standard against which other preprint/postprint servers
should be compared.  I'm also assuming that preprint/postprint servers
will continue to provide one valuable way (but not the *only* way!) to
free the peer-reviewed research literature.

Jim Till
University of Toronto


Two layers of research literature

2000-10-31 Thread Jim Till
On Tue, 31 Oct 2000, J.W.T.Smith wrote [in part, on the Subject:
Re: Workshop on Open Archives Initiative in Europe]:

 Yes, I still believe there
 will be subscription services but these services will be paid for their
 skills in locating and organising relevant information for their
 subscribers not because they 'own' any of this information.

It seems to me that one needs to bear in mind two major types of research
literature: 1) the 'primary' literature (original data and/or novel
conceptual contributions); and, 2) the 'secondary' literature (layered
over the primary literature, e.g. as editorials, reviews, meta-analyses,
commentaries, etc.).  The 'secondary' literature requires skills in
locating and organizing relevant information.

I hope that the time will soon come when much of the high-quality
'primary' AND 'secondary' literature will be freely available online.

For example, many 'signpost' websites (ones that locate and organize URLs)
already exist on the web.  Some have editorial boards responsible for
monitoring their contents, and some don't.  It seems to me that at least
some of them could/should be regarded as a valuable part of the 'secondary'
literature.  There are also very many that could/should be regarded only
as 'popular' or even 'vanity' literature, and some that are in the gray
area in between.  Most such 'signpost' websites are currently under a
cloud of poor prestige and/or lack of recognition (the 'clouded'
literature!) from the perspective of traditional academia.

It seems obvious to me that, as the research literature is freed from the
constraints imposed by the traditional printed journals, at least some of
these 'signpost' websites, designed to locate and organize noteworthy
online information, will make increasingly important contributions to the
'secondary' literature, as well as to the 'popular' literature intended
for non-academic readers [see also a short invited commentary, at:
http://www.cancerlynx.com/internet_contributions.html].

Jim Till
University of Toronto


Re: Effect of free access on subscription revenues

2000-10-03 Thread Jim Till
On Mon, 2 Oct 2000, David Siu wrote [in part]:

 It strikes me as being slightly irrational for a library to pay for
 what it could get for free.  Do you think a library might do this because
 being online increases reader demand for the print journal?  Are people
 finding out about articles they might want to read online and then going to
 read the print version?  If this is the case might it be attributed to
 reading habits (a preference for print) that might wane as people habituate
 themselves to reading online (even more so than they have today?)

If the printed version of the journal is available for free at one's local
university library, but only the table of contents and the abstracts are
available for free online, then the online version would be expected to
increase reader demand for the printed version.  It's a strategy that will
appeal to publishers, but not to readers.

Jim Till
University of Toronto


Re: Incentives

2000-09-16 Thread Jim Till
Earlier in September, the attached information was included in Volume 3,
Issue 25 of the ChemWeb.com News Bulletin.  Note the incentive for those
in the 'first 1000'.

Please see earlier messages in this thread about this preprint server. It
will be interesting to see how many of the preprints are subsequently
published. (Then, if I understand correctly, they will be removed from the
server and replaced by an abstract and a link to the journal article; the
full text will be behind a you-must-pay-for-it firewall).

Jim Till
Toronto, Canada


=
3 - The Chemistry Preprint Server (CPS)
=

The Chemistry Preprint Server (CPS) is growing every day.  With 34
submissions live as of Monday 4 August, CPS is quickly becoming an
essential resource for chemists around the world.

[snip]

If your submission is one of the first 1000, you will become a
'Preprint Pioneer' and be awarded a commemorative certificate
in recognition of your contribution in making the Chemistry
Preprint Server a success and revolutionising chemistry
communication.

To submit or view go to
http://preprint.chemweb.com

[remainder snipped]

===


Re: A Role for SPARC in Freeing the Refereed Literature

2000-06-21 Thread Jim Till
If the radical (and undesirable) scenario outlined by David Goodman
(illegal free distribution) cannot be prevented, perhaps extensive stable
open-archiving of such illegally-distributed research results also can't
be prevented?

Jim Till
Joint Centre for Bioethics
University of Toronto


On Wed, 21 Jun 2000, David Goodman wrote [in response to Stevan Harnad]:

 [dg] I think most of us in this discussion fully support the efforts
 [dg] you and others are making to permit and facilitate legal free
 [dg] distribution of the results of research. But regardless of their
 [dg] success, the predominant mode of access may conceivably switch to
 [dg] illegal free distribution, regardless of all efforts to prevent
 [dg] it. Of course most of us -- I hope -- think this very
 [dg] undesirable, but that might not prevent it from happening.

 Stevan Harnad wrote:

  [sh] Please see the napster thread in this Forum. My own view is
  [sh] that there is a profound DISanalogy between consumer-end
  [sh] rip-off, napster-style, of NON-give-away work (such as MP3
  [sh] music), which is illegal and not to be condoned, and author-end
  [sh] open-archiving of give-away work (refereed research reports),
  [sh] which can be done completely legally, and is both optimal for
  [sh] research and researchers and inevitable.


Re: A Role for SPARC in Freeing the Refereed Literature

2000-06-21 Thread Jim Till
I suspect that Stevan will soon inform me that threads like this one are
beyond the scope of this particular forum, but I can imagine (very
undesirable!) scenarios where open-archiving via 'allo-piracy' might
be attempted.

Suppose, for example, that some small, somewhat underdeveloped nation (but
not so underdeveloped that it has no internet infrastructure) decided to
turn a 'blind eye' to the establishment of such an open archive.  Could
this be prevented?

--Jim Till


 On Wed, 21 Jun 2000, Jim Till wrote:

 [jt] If the radical (and undesirable) scenario outlined by David Goodman
 [jt] (illegal free distribution) cannot be prevented, perhaps extensive
 [jt] stable open-archiving of such illegally-distributed research
 [jt] results also can't be prevented?

On Wed, 21 Jun 2000, Stevan Harnad responded [in part]:

 [sh] Now what Jim seems to be suggesting above is that one could somehow
 [sh] get to open-archiving via allo-piracy. I steal YOUR product, and
 [sh] then publicly archive it for one and all. Of course that won't
 [sh] work! For the reason above. The only one who can SELF-archive his
 [sh] own work with impunity is oneSELF. So napster-style, consumer-end
 [sh] allo-piracy has nothing whatsoever to do with it; it's totally
 [sh] out of the loop.

 -
  On Wed, 21 Jun 2000, David Goodman wrote [in response to Stevan Harnad]:
 
   [dg] I think most of us in this discussion fully support the efforts
   [dg] you and others are making to permit and facilitate legal free
   [dg] distribution of the results of research. But regardless of their
   [dg] success, the predominant mode of access may conceivably switch to
   [dg] illegal free distribution, regardless of all efforts to prevent
   [dg] it. Of course most of us -- I hope -- think this very
   [dg] undesirable, but that might not prevent it from happening.
  
   Stevan Harnad wrote:
 
[sh] Please see the napster thread in this Forum. My own view is
[sh] that there is a profound DISanalogy between consumer-end
[sh] rip-off, napster-style, of NON-give-away work (such as MP3
[sh] music), which is illegal and not to be condoned, and author-end
[sh] open-archiving of give-away work (refereed research reports),
[sh] which can be done completely legally, and is both optimal for
[sh] research and researchers and inevitable.


ClinMed NetPrints

2000-04-04 Thread Jim Till
A search for eprints posted at the ClinMed NetPrints website (at
http://clinmed.netprints.org/home.dtl) yielded a total of 13 eprints posted
between December 13, 1999 and March 21, 2000.

I posted one of them.  The abstract (at:
http://clinmed.netprints.org/cgi/content/abstract/210010) is entitled:
"Peer review in a post-eprints world."

In the article, I assumed that rejection rates of the kind experienced as a
result of conventional peer review shouldn't be a problem for eprints.  This
assumption appears to be incorrect for NetPrints.  If the code numbers
assigned to NetPrints are assigned sequentially, then it appears that at
least 32 eprints were submitted to the ClinMed NetPrints website between
December 13, 1999 and March 21, 2000.  Of these, only 13 have been posted.
Thus, the acceptance rate appears to be about 40%.  It seems that, so far,
the rejection rates for ClinMed NetPrints have been quite high.

It's also noteworthy that the number of 'rapid responses' to the posted
NetPrints seems to have been very low.  Not what I would have predicted.

Jim Till
Joint Centre for Bioethics
University of Toronto


Re: ClinMed NetPrints

2000-04-04 Thread Jim Till
"... Scientists may
well end up publishing their paper not with a journal, but with a
publisher who maintains a database of different manuscripts."

It remains to be seen whether or not the ClinMed NetPrints website will
gain acceptance as a database of this kind.  In the meantime, perhaps
there needs to be as little blurring as possible with regard to the
criteria applied for inclusion of articles into such databases?

Jim Till
Joint Centre for Bioethics
University of Toronto


Re: Information Exchange Groups (IEGs)

2000-03-19 Thread Jim Till
On Mon, 13 Mar 2000, David Goodman wrote:

 Two key objections to IEGs at the time were: 1. Their exclusive
 nature. They were available only to a small group of laboratories.
 2. The extremely cumbersome method of distribution and inconvenient
 format of the material. ...

I've obtained a copy of David Green's article in International Science and
Technology (renamed Science & Technology after 1967) in no. 65, May 1967,
pp. 82-88.  It's a very interesting article.  For example, Green (who
chaired IEG No. 1, on electron transfer and oxidative phosphorylation)
noted that: "A group chairman was selected whose essential mandate was to
ensure that every active worker in the field should become a member, and
that communication between members should be maximized."  He claimed that
the only qualification for membership was evidence that the applicant was
an active worker in the field, and that "Of the 725 members of my group,
329 were resident outside the U.S., with 32 different countries
represented."  So, it appears that the IEGs (or, at least, IEG No. 1)
tried *not* to be exclusive.

Green also claimed that: "At least 90% of the important papers in my field
were being processed through IEG No. 1 before the group was terminated."
It appears that it wasn't the active workers in the field who were
effectively excluded, it was those who might wish to make practical use of
the contents of these important papers, for educational purposes, or for
applied and developmental work (such as in clinical medicine, see, for
example, an editorial about the IEGs, entitled 'Information exchange', in
N. Engl. J. Med. 1967; 276 no. 4: 238-9).

Another interesting quotation from Green's article: "In the early days,
many believed that the IEG's [sic] would be outlets for a flood of
rubbish.  This flood never materialized [for IEG No. 1].  When
communication is to be scrutinized by 700 or more experts, only a fool
would risk presenting on [sic] inferior article or a potboiler.  The
quality of the communications was certainly no worse than the quality of
articles found in the published literature, and this despite the absence
of reviewing or editorial selection."  [It should be noted that David
Green, at that time, was co-director of the Institute for Enzyme Research
at the University of Wisconsin, and was a leader in the field of IEG No.
1.]

Green did not name the executive editors of the five biochemical journals
who [at a meeting of the Commission of Editors of Biochemical Journals of
the International Union of Biochemistry in Vienna on 10-11 September
1966] "decided to reject the publication of any article that had been
distributed previously through IEG."

However, they are identified in a letter by W.V. Thorpe published in
Science (1967; 155 no. 3767, 10 March: 1195-6).  They were some of the
senior leaders in their fields:

J.T. Edsall (J. Biol. Chem.)
J.C. Kendrew (J. Mol. Biol.)
H. Neurath (Biochem.)
E.C. Slater (Biochim. Biophys. Acta.)
W.V. Thorpe (Biochem. J.)

A note in Nature ('Preprints made outlaws', Nature 1966; 212, 1 October:
4) about this meeting in Vienna, includes a comment that the editors of
six principal journals agreed to make recommendations to their editorial
boards that could put IEG out of business.  I've not been able to
identify the sixth editor (if indeed there was a sixth).  The two lethal
recommendations were: 1) not to accept for publication preprints
previously circulated through the IEGs, and, 2) not to allow papers
already accepted for publication to be circulated in the IEG system.

Green suggested, in his article, that: In retrospect, it was not the
failure, but rather the overwhelming success of the [IEG] experiment,
which finally spelled its doom.

A (tentative!) conclusion of my own: a majority of the IEGs were indeed
probably too successful, both from the viewpoint of the editors of major
journals (continuation of IEGs would reduce their prestige), and also from
the perspective of NIH administrators (appropriate continuation of the
IEGs, in adequate numbers, and supported by adequate reprinting
facilities, would be quite costly).

What does seem very clear is that several helpful lessons could be learned
from a thorough and well-balanced evaluation of the IEG experiment.

--Jim Till

Ontario Cancer Institute, and
Joint Centre for Bioethics, University of Toronto
Toronto, Canada


Re: Medical journals are dead. Long live medical journals

2000-03-03 Thread Jim Till
I've tried to obtain a copy of the paper by David Green, referred to by
Steve Hitchcock.  I've been unsuccessful so far, but I did find a review by
Ann B. Piternick ("Attempts to find alternatives to the scientific journal:
a brief review", The Journal of Academic Librarianship, 1989; 15(5): 260-266).
She comments that the IEG experiment "was virtually killed when it appeared
to threaten formal publication ... even though it was agreed to cite them
[preprints] as 'personal communications' and it was expected that formal
publication in refereed journals would follow. (In most cases, it did)." Her
reference list includes a citation of the paper by Green.

I've also looked at several letters published in Science in 1966, including
a very interesting series of evaluations of IEGs in Science 1966; 154:
332-336 (21 October), and the letter by Eugene Confrey in Science 1966; 154:
843 (18 November).

Confrey (of NIH) provided two main reasons why the IEG experiment was
'discontinued': 1) "the original purpose of the experiment has been
achieved"; and, 2) "the rapid growth of IEG in the last two years has now
reached the threshold limit for the NIH facilities to accommodate."  Confrey
went on to suggest some lessons that were learned from the IEG experiment,
such as: "The [information exchange] group should be kept as small as
possible by the choice of scope of the phenomenon or problem encompassed,"
and, "The area chosen should be characterized by a high energy of scientific
enquiry."

There's no mention in the Confrey letter about concerns that the IEG
experiment might threaten formal publication.  Instead, Confrey wrote:
"Under suitable control, an IEG could serve as an adjunct system to
complement existing journals and periodicals in critical areas determined by
responsible officials of a society, or an organized group of the scientific
community."

However the 'threat' is described, a key word seems to be 'control'!

--Jim Till


On Tue, 29 Feb 2000 15:22:46 +, Steve Hitchcock sh...@ecs.soton.ac.uk
wrote [in part]:

At 08:23 AM 2/29/00 -0500, Albert Henderson wrote:
Actually, the National Institutes of Health sponsored preprint
distribution in the 1960s, much like one in high energy physics
funded by the Atomic Energy Commission and run by the American
Institute of Physics. As described above, it involved paper
copies sent by mail and was not available to the general public.
The Information Exchange Groups (IEG) experiment went down in
flames amidst complaints about the deteriorating quality of its
content. See P H Abelson (SCIENCE 1966;154:727) or E A Confrey
(SCIENCE 1966;154:843) for some details.

Or see Green, "Death of an experiment", International Science and Technology,
May 1967, 82-88.

I can't instantly retrieve the Science articles cited above so I'm
guessing, but I suspect Green has a different point of view.

The editors of five biomedical journals met and agreed to refuse
publication of any manuscript previously circulated via IEG. This
unaccountable decision turned out to be lethal to the IEG.