Eric Hellman wrote:
May I just add here that of all the things we've talked about in
these threads, perhaps the only thing that will still be in use a
hundred years from now will be Unicode. إن شاء الله (God willing)
Stuart Yeates wrote:
Sadly, yes, I agree with you on this.
Do you have any idea how
Stuart Yeates wrote:
A great deal of heat has been vented in this thread, and at least a
little light.
I'd like to invite everyone to contribute to the wikipedia page at
http://en.wikipedia.org/wiki/OpenURL in the hopes that it evolves into a
better overview of the protocol, the ecosystem
Dead ends from OpenURL-enabled hyperlinks aren't a result of the standard
though, but rather an aspect of both the problem they are trying to solve
and the conceptual way they try to do this.
I'd contend these dead ends are an implementation issue - and despite this I
have to say that my
have to say that my
Alex,
Could you expand on how you think the problem that OpenURL tackles would
have been better approached with existing mechanisms? I'm not debating this
necessarily, but from my perspective when OpenURL was first introduced it
solved a real problem that I hadn't seen solved before.
Owen
On
Hi,
Jakob Voss schrieb:
...
Am I right that neither OpenURL nor COinS strictly defines a metadata
model with a set of entities/attributes/fields/you-name-it and their
definition? Apparently all ContextObjects metadata formats are based on
Tim,
I'd vote for adopting the same approach as COinS on the basis that it already
has some level of adoption, and we know it covers at least some of what
libraries and academic users might want to do (it is used by both libraries
and consumer tools such as Zotero). We are talking Books (from what
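For reference, a COinS is just a KEV ContextObject URL-encoded into the title attribute of an empty span with class Z3988. A minimal sketch in Python (the book metadata here is invented for illustration):

```python
from urllib.parse import urlencode
from html import escape

def coins_span(metadata):
    """Embed a KEV ContextObject in an HTML span, as COinS prescribes."""
    kev = urlencode({"ctx_ver": "Z39.88-2004", **metadata})
    return '<span class="Z3988" title="%s"></span>' % escape(kev)

# Hypothetical book record, just to show the rft.* keys in play
span = coins_span({
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
    "rft.btitle": "An Invented Book Title",
    "rft.isbn": "9780000000000",
})
```

Tools like Zotero scan pages for exactly this span pattern and decode the title attribute back into citation metadata.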
On Fri, Apr 30, 2010 at 18:47, Owen Stephens o...@ostephens.com wrote:
Could you expand on how you think the problem that OpenURL tackles would
have been better approached with existing mechanisms?
As we all know, it's pretty much a spec for a way to template incoming
and outgoing URLs,
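That templating idea can be sketched in a few lines: a ContextObject is serialized as key/encoded-value (KEV) pairs and appended to a resolver's base URL. The resolver address below is hypothetical:

```python
from urllib.parse import urlencode

def build_openurl(base_url, metadata):
    """Serialize citation metadata as an OpenURL 1.0 KEV query string.

    base_url is an institution's link-resolver endpoint (hypothetical
    here); the rft.* keys follow the KEV journal metadata format.
    """
    params = {"ctx_ver": "Z39.88-2004"}
    params.update(metadata)
    return base_url + "?" + urlencode(params)

url = build_openurl(
    "http://resolver.example.edu/openurl",  # hypothetical resolver
    {
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.atitle": "An Example Article",
        "rft.jtitle": "Journal of Examples",
        "rft.issn": "1234-5678",
    },
)
```

The "incoming" half is the mirror image: the resolver parses those same pairs back out and templates them into target URLs for whichever services it knows about.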
Thanks Alex,
This makes sense, and yes, I see what you're saying - and yes, if you end up
going back to custom coding because it's easier, it does seem to defeat the
purpose.
However I'd argue that actually OpenURL 'succeeded' because it did manage to
get some level of acceptance (ignoring the
On Fri, Apr 30, 2010 at 20:29, Owen Stephens o...@ostephens.com wrote:
However I'd argue that actually OpenURL 'succeeded' because it did manage to
get some level of acceptance (ignoring the question of whether it is v0.1 or
v1.0) - the cost of developing 'link resolvers' would have been much
Dead ends from OpenURL-enabled hyperlinks aren't a result of the standard
though, but rather an aspect of both the problem they are trying to solve
and the conceptual way they try to do this.
I'd contend these dead ends are an implementation issue.
Absolutely. There is no inherent reason
Hi all,
We have an application developer job opportunity in my department at the
Penn State Libraries.
Thanks,
Janis
Database Specialist
Level: 03
Work Unit: University Libraries
Department: Department Of Information Technologies
Job Number: 32060
Penn State University Libraries is seeking
On Fri, Apr 30, 2010 at 4:09 AM, Jakob Voss jakob.v...@gbv.de wrote:
Am I right that neither OpenURL nor COinS strictly defines a metadata model
with a set of entities/attributes/fields/you-name-it and their definition?
Apparently all ContextObjects metadata formats are based on non-normative
On Fri, Apr 30, 2010 at 7:59 AM, Kyle Banerjee kyle.baner...@gmail.com wrote:
An obvious thing for a resolver to be able to do is return results in JSON
so the OpenURL can be more than a static link. But since the standard
defines no such response, the site generating the OpenURL would have to
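Since the standard specifies no machine-readable response at all, any JSON shape would be resolver-specific. Purely as an illustration, a resolver might answer with something like:

```python
import json

# Entirely hypothetical response shape: the OpenURL standard defines no
# JSON (or any other machine-readable) response format, so this is just
# one way a resolver could echo the context object plus its services.
response = {
    "context_object": {"rft.issn": "1234-5678", "rft.volume": "12"},
    "services": [
        {"type": "fulltext", "url": "http://example.com/article/1"},
        {"type": "ill", "url": "http://example.com/ill-request"},
    ],
}
body = json.dumps(response)
```

Without something like this being standardized, every client has to be coded against each resolver individually, which is the stagnation being described.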
On Fri, Apr 30, 2010 at 9:09 AM, Ross Singer rossfsin...@gmail.com wrote:
I actually think this lack of any specified response format is a large
factor in the stagnation of OpenURL as a technology. Since a resolver
is under no obligation to do anything but present a web page it's
difficult
I'm exploring options for implementing a spelling suggestion or basic query
reformulation service in our home-grown search application (it searches the
library website, catalog, Summon, and a few other bins). Right now, my thought
is to provide results for whatever was searched for 'as is' and
Hi All,
Though hesitant to jump in here, I agree with Owen that the dead ends
aren't a standards issue. The bloat of the standard is, as is the lack
of a standardized response format, but the dead ends have to do with bad
metadata being coded into OpenURLs and with breakdowns in the
Eek. I was hoping for something much simpler. Do you realize that you're asking
for a service taxonomy?
On Apr 30, 2010, at 10:22 AM, Ross Singer wrote:
I think the basis of a response could actually be another context
object with the 'services' entity containing a list of
services/targets
Cross-posted; apologies for duplication.
Dear colleagues and friends,
Please join us for:
New York Technical Services Librarians
Spring Meeting Program
Wednesday, May 19, 2010
Online registration is now open!
Hello all, apologies for cross-posting.
We have just released a PHP/symfony application for processing the
inventory of ILS items, under the New BSD license. This is great for
those of us with an ILS that can do data export but lacks an inventory
module. Code is included for importing items
This page:
http://www.loc.gov/standards/sru/resources/schemas.html
says:
The Explain document lists the XML schemas for a given database in which
records may be transferred. Every schema is unambiguously identified by a URI,
and a server may assign a short name, which may or may not be the
schemaInfo is what you're looking for I think.
Look at http://z3950.loc.gov:7090/voyager.
Line 74, for example,
<schemaInfo>
  <schema identifier="info:srw/schema/1/marcxml-v1.1" sort="false" name="marcxml">
    <title>MARCXML</title>
  </schema>
</schemaInfo>
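Pulling the schema list out of an Explain record is then straightforward with any XML parser. A sketch using that fragment (real Explain records wrap it in the ZeeRex explain namespace, omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# The schemaInfo fragment from the Explain record; the surrounding
# explain element and its namespace are omitted for brevity.
fragment = """
<schemaInfo>
  <schema identifier="info:srw/schema/1/marcxml-v1.1" sort="false" name="marcxml">
    <title>MARCXML</title>
  </schema>
</schemaInfo>
"""

# Map each schema's short name to its identifying URI
schemas = {
    s.get("name"): s.get("identifier")
    for s in ET.fromstring(fragment).findall("schema")
}
```

A client can check this map before issuing a searchRetrieve request with a given recordSchema.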
Is this what you're looking for?
--Ray
- Original Message
There's a schemaInfo element right under the explain element that
carries that data.
Here's a pointer to the Explain record for my LCNAF database.
http://alcme.oclc.org/srw/search/lcnaf
Don't let the browser fool you! View the source and you'll see the
actual XML that was returned. The
I am not a fan of services that give spelling suggestions based on their own
web-wide universe of terms. It's better to suggest only terms that are
actually found within the smaller universe of your own materials. That way the
user isn't offered a link that's guaranteed to get them zero
Seconded. We use Solr's SpellCheckComponent to accomplish exactly this.
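For anyone curious, once the SpellCheckComponent is configured in solrconfig.xml, querying it is just a couple of extra request parameters. A sketch (the core URL is hypothetical):

```python
from urllib.parse import urlencode

def spellcheck_url(solr_core_url, query):
    """Build a Solr select URL asking the SpellCheckComponent for
    suggestions drawn from the index's own terms."""
    params = {
        "q": query,
        "spellcheck": "true",
        "spellcheck.q": query,
        "spellcheck.collate": "true",  # ask for a rewritten whole query
        "wt": "json",
    }
    return solr_core_url + "/select?" + urlencode(params)

# Hypothetical core name; point this at your own Solr instance
url = spellcheck_url("http://localhost:8983/solr/catalog", "libary catalog")
```

Because the suggestions come from a field in your own index, the "did you mean" link can never point at a zero-result query for terms you don't hold.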
Brad Dewar
bde...@stfx.ca
-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Genny
Engel
Sent: April-30-10 6:00 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB]
Seconded. We use Solr's SpellCheckComponent to accomplish exactly this.
+1
I like chocolate milk.