Has anyone figured out a URL pattern for the new Ulrich's platform at
https://ulrichsweb.serialssolutions.com/ that allows direct linking
(aka deep linking) to a particular known ISSN record?
For use, for instance, with the SFX Ulrich's target?
Someone recently on this list was saying something about ways to embed
facets in for instance Atom feeds.
I was reminded of that, because checking out an Atom feed from Google
Books Data API, in Internet Explorer... Internet Explorer displays
'facet' type restrictions for it, under a heading
2:42 PM, LeVan,Ralph wrote:
-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
Jonathan Rochkind
[I agree that simply copying the Solr API for a standard like SRU is not
the way to go -- Solr is an application that supports various low-level
I've thought about/messed with this stuff before, and come up with no
good elegant solution. It is indeed kind of a mess.
On 2/24/2011 12:03 PM, Robert Sanderson wrote:
That is (still) incorrect.
A single schema may contain multiple namespaces, and there isn't a
unique identifier for a
Interesting, does their link resolver API do article-level links, or
just journal title level links?
I/you/one could easily write a plugin for Umlaut for their API; it would be
an interesting exercise.
On 2/17/2011 1:18 AM, Markus Fischer wrote:
The cheapest and best A to Z list i know is the
No documentation in English, huh? This is a very interesting service I
had not previously been aware of, indeed quite powerful. It's free for
libraries to register their own holdings with EZB? Even American libraries?
Does the API by chance cover that registration of holdings too, so
On 2/17/2011 12:50 PM, Eric Hellman wrote:
If list members would like to name and shame GPL incompatible interfaces that
they're stuck working with, have at it. If I'm mistaken and there are none left, then I'd
like to know it.
Well, the problem with viral licenses like GPL is that other
We have Metalib and use Xerxes as a front-end to Metalib, so we just use
Xerxes as our A-Z list, or directory of databases, too.
But what I'd really like to do is just _use the catalog_. If there was
a good interface for the catalog, and these resources were included in
its search... why
Yeah, as one of the developers of Xerxes, I've been meaning to fix that
long-page problem. If any other PHP developers want to contribute a patch,
please feel free. It won't take any herculean R&D to fix that feature, just
figuring out what the interface ought to look like and making it so.
A bunch of us are using Solr/lucene for discovery over library
bibliographic records, which is based on the basic tf*idf weighting type
algorithm, with a bunch of tweaks. So all of us doing that, and
finding it pretty successful, are probably surprised to hear that this
approach won't work
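For anyone unfamiliar, the core weighting idea is simple enough to sketch in a few lines (plain Python, illustrative only; Lucene's actual scoring adds length normalization, boosts, and the tweaks mentioned above):

```python
import math

def tf_idf(term, doc, corpus):
    # term frequency: raw count of the term in this document
    tf = doc.count(term)
    # document frequency: how many documents contain the term at all
    df = sum(1 for d in corpus if term in d)
    # inverse document frequency: terms rare across the corpus weigh more
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

corpus = [
    ["cataloging", "rules", "marc"],
    ["solr", "lucene", "index"],
    ["solr", "relevance", "ranking"],
]
# "lucene" (in 1 of 3 docs) outscores "solr" (in 2 of 3) for the same doc
```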
will -- then by all means do that. If
you can also attempt to future-proof your URL space with something like
ARKs [2], then I think it is the best of all worlds.
[1] http://www.w3.org/Provider/Style/URI
[2] https://confluence.ucop.edu/display/Curation/ARK
Peter
On Jan 26, 2011, at 6:23 PM, Jonathan
Seems like your link abstraction layer should be baked into your system,
so the URL your users see in the location bar IS the one that your link
abstraction layer is handling and you are committing to persisting.
There's no reason a URL has to begin with 'purl.org' to be part of a
persisting
Yep, using a globally unique identifier like an ARK is better than my
/records/12345 example, that's a better way to do it for sure.
So in that example,
http://digital.library.unt.edu/ark:/67531/metapth60974/ is what you
access, http://digital.library.unt.edu/ark:/67531/metapth60974/ is what
What some in this thread are frowning on is having an abstraction layer such
that the persistent URL for your web page or resource is not the URL that
typical users see in their browser location bar when viewing that resource or
web page.
If your abstraction layer can make that so, then I
On 1/18/2011 9:05 AM, Richard, Joel M wrote:
Our central wireless group has recommended that if everyone has an 802.11n card
(5 GHz radio spectrum) in their device, they will likely have a much better
connectivity experience – it does not mean that you have to have one, it
will just
I'm honestly not sure why we are fund-driving this from individuals,
when the conference is, according to its organizers, already
sufficiently funded. So, no, I see no reason to up the goal.
On 1/13/2011 10:25 AM, Kevin S. Clarke wrote:
Great job Code4Lib!
You've collectively contributed
Might be worth finding some people from previous years; I think previous years
managed recording of presentation projection (if not live screencast), despite
presenters swapping out machines. They did it somehow.
From: Code for Libraries
/plain did *not* fix the problem in Notepad, and I do wonder what
it would take to make that program happy, but in this case it doesn't much
matter.)
Thanks for the help
Ken
-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Jonathan Rochkind
Sent: Tuesday, January 11, 2011 3:41 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] data export help: line breaks on tab-delimited
download
line breaks don't
As far as I can tell, while there are several, there are none that
actually Just Work. It seems to be an area still in flux, with people still
coming up with an open source way to do it that is reliable and easy
to use and just works.
The main division in current approaches seems to be
line breaks don't appear when you view it with what software?
Can you have your browser save it to disk after it prompts you to do so,
and open with a reliable text editor you know how to use and confirm if
\n is really still in the file or not?
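If it helps, here is a quick way to make that check from a script instead of eyeballing an editor; the byte counts distinguish CRLF pairs from bare \n or \r (a minimal sketch, run it on whatever file your browser saved):

```python
def count_line_endings(data: bytes):
    """Count CRLF pairs, bare LFs, and bare CRs in raw file bytes."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf   # bare \n (Unix-style)
    cr = data.count(b"\r") - crlf   # bare \r (classic Mac; invisible to Notepad)
    return {"crlf": crlf, "lf": lf, "cr": cr}

# Stand-in for bytes read from the saved download: open(path, "rb").read()
sample = b"col1\tcol2\r\nval1\tval2\nval3\tval4"
counts = count_line_endings(sample)  # -> {'crlf': 1, 'lf': 1, 'cr': 0}
```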
If you are viewing it in your web browser, then
The Journal actually is hosted on WordPress, although I'm not sure if
it's a recent enough version for the plug-in.
I had the impression looking at it before that Anthologize would only
make an 'anthology' of your entire wordpress site.
Is there any easy way to get it to, for instance, make
/11 10:45 AM, Eric Lease Morgan emor...@nd.edu wrote:
On Jan 4, 2011, at 11:40 AM, Jonathan Rochkind wrote:
...Is there any easy way to get it to, for instance, make an anthology
of
all the posts with a certain WordPress tag or category instead?...
Based on my (poor) recollection of playing
Sweet, if you have an automated process that produces these reliably, or once
you do, please let us know on the Journal list, and we'll see if we can
integrate it into our regular production process and provide links to epubs
from the journal home pages.
I'm not sure, there are definitely some tricks there.
But if you do come up with some CSS that works robustly (your rough cut demo is
doing some odd things, cutting text off in the middle of paragraphs, putting
scrollbars in the middle of the page, etc), we at the journal would probably be
In my opinion, the best way to understand Work is as the set of all
expressions/manifestations that... belong to that Work. Work is sort
of a culturally constructed concept, but we know it when we see it. But
I think the WEMI hierarchy is best understood as set relationships --
while not
If you're looking for suggestions on other publishers to solicit (not
that you asked : ) ),
Manning, publisher of Erik Hatcher's Lucene in Action among other useful
titles.
Pragmatic Bookshelf, publisher of the canonical Rails books, among other
useful titles.
[ But I don't mean to add
Original Message
Subject:Code4lib 2011 Conference Registration
Date: Wed, 8 Dec 2010 11:40:53 -0500
From: mcdonald rhmcdon...@gmail.com
Reply-To: code4lib...@googlegroups.com code4lib...@googlegroups.com
To: code4libcon code4lib...@googlegroups.com
As
There is no magic way to make a 2.5M file load quickly in a browser --
let alone be actually useable once in a browser window. (What human is
going to read 40 or 50 or 100 pages all at once in a browser window?).
2.5M is just too big for a web page. You're going to have to split it up.
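The splitting itself is trivial; the real design work is navigation between pages. A minimal sketch of fixed-size pagination (page size is arbitrary):

```python
def paginate(lines, per_page=200):
    """Split a long document's lines into fixed-size pages served separately."""
    return [lines[i:i + per_page] for i in range(0, len(lines), per_page)]

doc = [f"row {n}" for n in range(1000)]   # stand-in for the oversized export
pages = paginate(doc)                     # 5 pages of 200 lines each
```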
On
No tasteful way, no. And probably no way at all when it's on a third
party website like LexisNexis -- short of getting the user to install a
browser plugin maybe, which will require different code for every
browser, which is a lot of work to go to for a feature that I predict
will really
I would be very unlikely to use someone's homegrown library specific
scripting language.
However, if you want to make a library for an existing popular scripting
language that handles your specific domain well, I'd be quite likely to
use that if I had a problem with your domain and I was
Alexander Johannesen wrote:
Is it to throttle spam or something? 50 seems rather low, and it's
rather depressing to have a lively discussion throttled like that. Not
Pretty sure it wasn't depressing to the vast majority of the listserv
audience. That was/is a discussion that benefited
Neat, if you put this into production at a public URL anytime, do let us
know.
Elliot Hallmark wrote:
Re: simple, flexible ILS for small library
hello all,
Just wanted to mention that I did decide to code an ILS for a book
sharing library. Tweaking conventional ILS or bartering software
Emily Lynema wrote:
standardized metadata! While we had envisioned using something like
MARCXML or ISO Holdings here to express things like serial runs, there
Kind of a side note, but please consider ONIX Serial Holdings for
expressing serial runs! It is by far the best schema I've seen
Yes, it is designed to be a round-trippable expression of ordinary marc
in XML. Some reasons this is useful:
1. No maximum record length, unlike actual marc which tops out at 99,999 bytes.
2. You can use XSLT and other XML tools to work with it, and store it in
stores optimized for XML (or that only
MODS was an attempt to mostly-but-not-entirely-roundtrippably represent
data in MARC in a format that's more 'normal' XML, without packed bytes
in elements, with element names that are more or less self-documenting,
etc. It's caught on even less than MARCXML though, so if you find
MARCXML
Marc in JSON can be a nice middle-ground, faster/smaller than MarcXML
(although still probably not as compact as binary), based on a standard low-level
data format so easier to work with using existing tools (and developers
eyes) than binary, no maximum record length.
There have been a couple competing
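For illustration, the general shape those competing MARC-in-JSON proposals share is roughly a leader string plus an ordered field list; the record below is a made-up example, not a transcription of any one spec:

```python
import json

# Made-up example of the general MARC-in-JSON shape: a leader plus an ordered
# list of fields (control fields as plain strings, data fields with indicators
# and subfields). Details are illustrative, not any one competing proposal.
record = {
    "leader": "00000cam a2200000 a 4500",
    "fields": [
        {"001": "ocm12345678"},
        {"245": {"ind1": "1", "ind2": "0",
                 "subfields": [{"a": "Lucene in action /"},
                               {"c": "Erik Hatcher."}]}},
    ],
}

# No maximum record length, and any JSON tool chain round-trips it.
roundtrip = json.loads(json.dumps(record))
```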
Tim Spalding wrote:
Does processing speed of something matter anymore? You'd have to be
doing a LOT of processing to care, wouldn't you?
Yes, which sometimes you are. Say, when you're indexing 2 or 3 or 10
million marc records into, say, solr.
Which is faster depends on what language and
I don't think that's an abuse. I consider dlf:holdings to be for
information about a holdingset, or some collection of items, while
dlf:item is for information about an individual item.
I think regardless of what you do you are being over-optimistic in
thinking that if you just do dlf, your
I believe you are correct. The ils-di stuff is just kind of a framework
starting point, not (yet) a complete end-to-end standards-constrained
solution.
I believe you will find my thoughts and experiences on this issue
helpful. My own circumstances did not involve collection-level
anything,
Is there a unique ID delivered by your LDAP that is different from the
username, and could the apps be using that unique ID to match to
accounts instead of username? Some weird alphanumeric string that is
only used internally, but when they recreated her account she got a
different one?
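That suspected failure mode can be sketched concretely; the attribute name entry_uuid below is a stand-in for whatever internal ID the directory actually issues:

```python
# Illustration of the suspected failure mode: the app keys accounts on a
# directory-internal unique ID ("entry_uuid" is a stand-in name), not the
# username. Recreating the directory account issues a fresh ID, so the
# existing app account no longer matches even though the username is the same.
app_accounts = {"a1b2-c3d4": {"username": "jsmith", "saved_prefs": True}}

old_entry = {"username": "jsmith", "entry_uuid": "a1b2-c3d4"}
new_entry = {"username": "jsmith", "entry_uuid": "e5f6-a7b8"}  # after recreation

def find_app_account(ldap_entry):
    # lookup by internal ID: same username, but the new ID finds nothing
    return app_accounts.get(ldap_entry["entry_uuid"])
```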
That continues to be an awesome resource, Marshall, thanks for
maintaining it.
Breeding, Marshall wrote:
Since it has been mentioned a couple of times in this thread, here is some
additional information about lib-web-cats and Library Technology Guides.
(http://www.librarytechnology.org)
The
Others of us have no problem with pseudonymous or anonymous
subscriptions to the listserv, I hadn't been aware that there was any
general disapproval of this, although of course everyone can have their
own opinion.
It is however, I agree, nice to develop professional relationships with
Can you give some details (or references) to justify the belief that
OAuth isn't ready yet? (The fact that Twitter implemented it poorly
does not seem apropos to me, that's just a critique of Twitter, right?).
I don't agree or disagree, just trying to take this from fud-ish rumor
to facts to
of that with an automatic
solution that will Just Work without thinking. But your arguments are
not against OAuth. Maybe they're against trying to do remote
authentication between two servers AT ALL because of the inherent
problems with such, heh.
Jonathan
MJ Ray wrote:
Jonathan Rochkind wrote:
Can you
The thing this conversation (and Twitter) is missing, is that the OAuth
protocol neither requires nor relies upon each piece of client software
having a key of any kind. Twitter wants it to, so it can disable a
certain application (distributed and used by many people) if they
decide that app
that (eg) Jonathan
Rochkind has given authorization to Software A, to access API services
that read and write to confidential information associated with Jonathan
Rochkind's account on Server B. Server B can be sure that Jonathan
Rochkind authorized Software A to do that. (Or someone that knew
This Code4Lib Journal article might be helpful:
http://journal.code4lib.org/articles/2055
Issue 8, 2009-11-23 http://journal.code4lib.org/issues/issue8
library/mobile: Tips on Designing and Developing Mobile Web Sites
Mobile applications can support learning by making library resources
And Michael Doran's own Code4Lib conference presentation is also worth a
glance, if you like (or are neutral towards) videos instead of texts.
Oops, except it looks like maybe video isn't available yet? What ever happened
to the video from the last conf? Or is it available but not linked to
Hey Owen, this sounds like it would make a pretty good Code4Lib Journal
article, if you're interested.
Sending this to the public list because, hey, which of YOU has something
that would make a good Code4Lib Journal article too? Please consider it.
Jonathan
Owen Stephens wrote:
I'm part of
I wonder how the field collapsing patch holds up on an index that contains 3
million documents, probably larger than your EAD-only one, but thinking about
combining EAD in an index with many many other documents (like with a library
catalog). Might be fine, might not.
(Even without field
Sent: Saturday, August 07, 2010 12:41 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] EAD in Blacklight (was: Re: [CODE4LIB] Batch loading in
fedora)
On Aug 6, 2010, at 8:07 PM, Jonathan Rochkind wrote:
I've been brainstorming other weird ways to do this. This one is totally
wacky
In my experience, you can't tell much about what you'd really want to
know for user needs from the indicators or subfield 3's, at least in my
catalog.
FRBR relationships probably don't work because the destination of an
arbitrary 856 is not necessarily a FRBR entity, and even if it is
So in our marc records, we have these 856 links, the meaning of which is
basically some web page related to the entity at hand. You don't
really know the relation, the granularity is not there.
So, fine, data is data, there ought to be some way to model this in
standard XML/RDF/DC/whatever,
The argument I've tried to make to content vendors (just in casual
conversation, never in actual negotiations) is that we'll still send the
user to their platform for actually accessing the text, we just want the
metadata (possibly including textual fulltext for searching) for
_searching_. So
Blake, Miriam E wrote:
Also, thinking about the kinds of services that users want from this data, we've
found the biggest need is to focus on citation references if you can get them.
(e.g. ISI)
Kind of a different topic, but my open source Umlaut software, which can
be thought of as a link
One case study on this very topic was published in the recent Code4Lib Journal,
it may be of use:
http://journal.code4lib.org/articles/3072
From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Tom
Vanmechelen
Cory Rockliff wrote:
Do libraries opt for these commercial 'pre-indexed' services simply
because they're a good value proposition compared to all the work of
indexing multiple resources from multiple vendors into one local index,
or is it that companies like iii and Ex Libris are the only ones
the aggregated
feed as a whole.
-Ross.
On Mon, Jun 28, 2010 at 11:12 AM, Jonathan Rochkind rochk...@jhu.edu wrote:
Code4libbers, anyone want to help out debugging this? I'm kind of the
'steward' of the planet code4lib, but haven't really spent much time with it
or gotten to understand it technically; I don't really know the technical
stuff myself.
I think there are way newer version of the planet software that it might
be nice to migrate to.
Jonathan
Gregory McClellan wrote:
I'd be willing to volunteer.
-Greg
On Mon, Jun 28, 2010 at 12:36 PM, Jonathan Rochkind rochk...@jhu.eduwrote:
Thanks very much Ross
Jakob Voss wrote:
Is there a nice piece of code or a tutorial or example how to easily
wrap your Solr instance to get a full SRU/SRW and/or OpenSearch
interface? Converting CQL to Solr query format is just one part of a
wrapper isn't it?
You are right, this is a building block, not a
cql-ruby is a ruby gem for parsing CQL, and serializing parse trees back
to CQL, to xCQL, or to a solr query.
A new version has been released, 0.8.0, available from gem update/install.
The new version improves greatly on the #to_solr serialization as a solr
query, providing support for
Subject: Re: [CODE4LIB] WorldCat as an OpenURL endpoint ?
On Mon, Jun 14, 2010 at 3:47 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
The trick here is that traditional library metadata practices make it _very
hard_ to tell if a _specific volume/issue_ is held by a given library. And
those
When I've tried to do this, it's been much harder than your story, I'm
afraid.
My library data is very inconsistent in the way it expresses its
holdings. Even _without_ missing items, the holdings are expressed in
human-readable narrative form which is very difficult to parse reliably.
I'm not sure what you mean by complete holdings? The library holds the
entire run of the journal from the first issue printed to the
last/current? Or just holdings that don't include missing statements?
Perhaps other institutions have more easily parseable holdings data (or
even holdings data
pieces and just need a schema and representation format, Onix Serial
Holdings is nice!
Jonathan
Kyle Banerjee wrote:
On Tue, Jun 15, 2010 at 10:13 AM, Jonathan Rochkind rochk...@jhu.eduwrote:
I'm not sure what you mean by complete holdings? The library holds the
entire run of the journal
Joe Hourcle wrote:
On Tue, 1 Jun 2010, Jonathan Rochkind wrote:
Accept-Ranges is a response header, not something that the client's
supposed to be sending.
Weird. Then can anyone explain why it's included as a request parameter
in the SRU 2.0 draft? Section 4.9.2.
Jonathan
(was: Inlining HTTP
Headers in
URLs )
On Wed, 2 Jun 2010, Jonathan Rochkind wrote:
Joe Hourcle wrote:
Accept-Ranges is a response header, not something that the
client's
supposed to be sending.
Weird. Then can anyone explain why it's included as a request
Erik Hetzner wrote:
Accept-Encoding is a little strange. It is used for gzip or deflate
compression, largely. I cannot imagine needing a link to a version
that is gzipped.
It is also hard to imagine why a link would want to specify the
charset to be used, possibly overriding a client’s
You could do this using mod_rewrite in apache alone, if you can find no
better way to do it, if your app is apache-fronted.
But it's not as obvious to me as it is to you that you ought to be
able to do everything through a URL that you can using a complete
interface to HTTP. I guess it is
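For the record, a hedged sketch of the mod_rewrite approach, mapping a URL path segment into a request header before the app sees the request (paths and header choice are illustrative, and it also needs mod_headers):

```apache
# Hedged sketch (untested): rewrite a leading /as/<value>/ path segment into
# the Accept request header. Requires mod_rewrite and mod_headers; the /as/
# prefix and the choice of header are illustrative assumptions.
RewriteEngine On
RewriteRule ^/as/([^/]+)(/.*)$ $2 [E=FORCED_ACCEPT:$1,PT]
RequestHeader set Accept "%{FORCED_ACCEPT}e" env=FORCED_ACCEPT
```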
Wait, but in the case you suspect is common, where you return results as
soon as the first resource is returned, and subsequent results are added
to the _end_ of the list
I'm thinking that in most of these cases, the subsequent results will be
several pages in, and the user will never
What terms do you suggest, Mike?
I think we're doomed no matter what with these, after certain
communities started to use federated search and metasearch in
directly opposite ways.
I also was told recently that what is called an accordion in English
is called a bandoneon in Spanish, and
Mike Taylor wrote:
What communities?
All I know is we here on this very list had some people _insisting_
that federated search _really_ meant aggregated index, and meta
search _really_ meant broadcast search, and other people insisting the
opposite. Both sides had citations to the
Jakub Skoczen wrote:
I wonder if someone, like Kuba, could design an 'extended async SRU' on top
of SRU, that is very SRU like, but builds on top of it to add just enough
operations for Kuba's use case area. I think that's the right way to
approach it.
Is there a particular
I'm interested in hearing more about what you're doing with your solr
index of LCSH terms. Do you have an application with documents _using_
those LCSH terms in Solr? I'm trying to figure out how to deal with an
index of LCSH terms updated from 'authorities' like id.loc.gov, in an
Yup. Buy, build, and borrow are pretty good categories.
But sometimes you _think_ you're buying, but you really end up borrowing
or even building. Other times, you can know and plan on borrowing or
building even when you buy a proprietary vendor product.
And as Ed mentions, another very
I _believe_ that the OCLC FirstSearch shibboleth server is still down,
for anyone who tries to send their users to FirstSearch via Shibboleth.
Simon Spero wrote:
At least it wasn't a totally transparent UPS test scheduled for the
Thursday of Thanksgiving weekend. My personal philosophy is
Theoretically, it sounds like Xerxes could maybe be rewritten to make
PazPar2 an alternate metasearch engine, instead of Metalib as it uses
now. The intended goals of Xerxes and PazPar2 complement each other
nicely, and they would work together well.
Just another one to add to the list of cool
Here is the API response Umlaut provides to OpenURL requests with
standard scholarly formats. This API response is of course to some
extent customized to Umlaut's particular context/use cases, it was not
necessarily intended to be any kind of standard -- certainly not with as
wide-ranging
=marcxml
<title>MARCXML</title>
</schema>
Is this what you're looking for?
--Ray
- Original Message -
From: Jonathan Rochkind rochk...@jhu.edu
To: CODE4LIB@LISTSERV.ND.EDU
Sent: Friday, April 30, 2010 3:57 PM
Subject: [CODE4LIB] SRU/ZeeRex explain question : record schemas
This page:
http
Denenberg, Library of Congress wrote:
From: Jonathan Rochkind rochk...@jhu.edu
Another question though. I note when looking up schemaInfo... I'm a bit
confused by the sort attribute. How could you sort by a schema? What is
this attribute actually for?
Well indulge me, this is best
I'm still confused about all this stuff too, but I've often see the
oai_dc format (for OAI/PMH I think?) used as a 'standard' way to expose
simple DC attributes.
One thing I was confused about was whether the oai_dc format _required_
the use of the old style DC uri's, or also allowed the use
Okay in the SRU/ZeeRex explain document, how do you advertise which
version of CQL you support, 1.1 or 1.2? Or is this just implied by
which version of SRU you support, 1.1 or 1.2? How do you advertise
THAT in an SRU/ZeeRex explain?
Jonathan
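For reference, the serverInfo element is where an SRU explain record declares its protocol version; whether that also implies the CQL version is exactly the open question above (values below are illustrative):

```xml
<!-- Illustrative ZeeRex explain fragment: the SRU version is declared here
     on serverInfo; nothing in it names a CQL version separately. -->
<serverInfo protocol="SRU" version="1.2">
  <host>sru.example.org</host>
  <port>80</port>
  <database>mydb</database>
</serverInfo>
```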
Jonathan Rochkind wrote:
I think
Hmm, you could theoretically assign chars in the private unicode area to
the chars you need -- but then have your application replace those chars
by small images on rendering/display.
This seems as clean a solution as you are likely to find. Your TEI
solution still requires chars-as-images
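A minimal sketch of the private-use-area idea: store the glyph as a PUA codepoint, swap it for an image at render time (the codepoint assignment and image path are hypothetical):

```python
# Sketch of the private-use-area approach: store an otherwise-unencodable
# glyph as a PUA codepoint, then substitute an image at display time.
# The codepoint assignment and image path are hypothetical.
PUA_GLYPH = "\ue000"
GLYPH_IMAGES = {PUA_GLYPH: '<img src="glyphs/e000.png" alt="[special char]"/>'}

def render(text):
    # replace each known PUA codepoint with its image tag for display
    for ch, img in GLYPH_IMAGES.items():
        text = text.replace(ch, img)
    return text

stored = f"before {PUA_GLYPH} after"   # what goes in your data store
displayed = render(stored)             # what the browser gets
```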
When it's actually a reference librarian using it for reference/research tasks,
I think it can be a legitimate use case -- so long as you remember that it is
representative of only a certain type of expert searcher (not necessarily
even every searcher requiring sophisticated or complex
This page:
http://www.loc.gov/standards/sru/resources/schemas.html
says:
The Explain document lists the XML schemas for a given database in which
records may be transferred. Every schemas is unambiguously identified by a URI
and a server may assign a short name, which may or may not be the
I agree that OpenURL is crappy.
My point was that the problem case -- 'identifying' (or describing an
element sufficient for identification, if you like to call it that)
publications that do not have standard identifiers -- is a real one.
OpenURL _does_ solve it. You _probably_ don't want
Yes, what MJ said is indeed exactly my perspective as well.
MJ Suhonos wrote:
It's not that it's cool to hate on OpenURL, but if you've really
worked with it it's easy to grow bitter.
Well, fair enough. Perhaps what I'm defending isn't OpenURL per se, but rather
the concept of being
I wouldn't count on the community using anything, just because random
people on the listserv voted on it.
If you're coding it, you should take account of the feedback, and then
go on and create something that YOU will use, and makes sense to you.
And then hope other people do too. That's
Benjamin Young wrote:
Additionally (as someone outside of the library community proper),
OpenURL's dependence on resolvers would be the largest concern.
This is a misconception. An OpenURL context object can be created to
provide structured semantic citation information, without any
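To make that concrete, here is a minimal sketch of a standalone OpenURL 1.0 KEV context object, using a citation that appears earlier in this thread; no resolver is involved unless you choose to prepend one's base URL:

```python
from urllib.parse import urlencode

# A standalone OpenURL 1.0 KEV context object: just structured citation
# key/values. The resolver base URL at the end is a hypothetical example.
citation = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": "library/mobile: Tips on Designing and Developing Mobile Web Sites",
    "rft.jtitle": "Code4Lib Journal",
    "rft.issue": "8",
    "rft.date": "2009",
}
kev = urlencode(citation)
# Prepend a resolver base URL only if and when you want one:
link = "http://resolver.example.edu/openurl?" + kev
```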
Jakob Voss wrote:
I. Identify publication = this can *only* be done seriously with
identifiers like ISBN, DOI, OCLCNum, LCCN etc.
Ah, but for better or for worse, that's not the world we live in. We
have LOTS of publications that either lack such identifiers altogether,
or where
Jakob Voss wrote:
There are lookup services to get a standard identifier when only some
bibliographic data is known - mainly OpenURL.
A standard identifier is not always _available_ -- even if you have
access to a service to look up standard identifiers (a not necessarily
realistic
Has anyone actually gotten up a _server-side_ process that uses CSL to
produce formatted citations? Using the citeproc-js with a certain
custom compiled js interpreter, or anything else?
This is what I'm interested in -- I'm not concerned with making it run
in a browser, so custom compiled
Jakob Voss wrote:
Call me pedantic but if you do not have an identifier then there is no
hope to identify the publication by means of metadata. You only
*describe* it with metadata and use additional heuristics (mostly search
engines) to hopefully identify the publication based on the
Eh, just do it when you've got to do it, says me. Just let us know in
advance when you're scheduling to do it, and how long you think the
outage will be, and maybe remind us what services are affected if you
know either!
Please don't wake up at 4am for your unpaid gig for us, we've got to do
So almost all of those identifiers can be formatted as a URI. Although
sometimes it takes an info: uri, which some people don't like, but I
like, for reasons relevant to their usefulness here.
ISBN, ISSN, LCCN, and OCLCnum all have registered info: URI
sub-schemes. I once tried to figure
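A rough sketch of what that formatting looks like; the template strings are from memory and from the list above, so treat them as illustrative and check the info-URI registry before relying on them:

```python
# Illustrative templates only: exact registered sub-scheme names should be
# verified against the info-URI registry before use.
def to_uri(scheme, value):
    templates = {
        "lccn": "info:lccn/{}",
        "oclcnum": "info:oclcnum/{}",
        "doi": "info:doi/{}",
        "isbn": "urn:isbn:{}",   # ISBN also has a URN namespace
    }
    return templates[scheme].format(value)

examples = [to_uri("lccn", "2002022641"), to_uri("oclcnum", "12345678")]
```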
, it requires an
iterative process of people trying to use it and seeing what they need.
I know this makes things hard from a grant-funded project management
perspective.
Jonathan
Riley, Jenn wrote:
On 4/20/10 7:18 PM, Jonathan Rochkind rochk...@jhu.edu wrote:
But first, to really
another URL, at least in the first
instance. That at least doubles the calls involved, and makes whatever
you build dependent on lots of external services that may or may not
work.
Best,
Tim
On Wed, Apr 21, 2010 at 10:45 AM, Jonathan Rochkind rochk...@jhu.edu wrote:
So almost all of those
I started preparing a longer answer to this, and still will provide one
eventually.
But first, to really answer the question, we need some more information
from you. What data do you actually have of value? Just saying we have
FRBRized data doesn't really tell me, FRBRized data can be almost
replacement for 360
Link. If anyone's already done that, I'd be keen to hear more.
regards
Dave Pattern
University of Huddersfield
From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan
Rochkind [rochk...@jhu.edu]
Sent: 19 April 2010 03