[CODE4LIB] locator

2010-06-30 Thread Tom Vanmechelen
We're considering expanding our service with an item locator. Mapping the
library (http://mashedlibrary.com/wiki/index.php?title=Mapping_the_library)
describes how to build this with Google Maps. But is this really the way to go?
Does anyone have any experience with this? Does anyone have best practices
for this kind of project, given that we have about 20 buildings spread all
over the town?

Tom

---
Tom Vanmechelen

K.U.Leuven / LIBIS
W. De Croylaan 54 bus 5592
BE-3001 Heverlee
Tel  +32 16 32 27 93


Re: [CODE4LIB] locator

2010-06-30 Thread Jonathan Rochkind
One case study on this very topic was published in the recent Code4Lib
Journal; it may be of use:

http://journal.code4lib.org/articles/3072

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Tom 
Vanmechelen [tom.vanmeche...@libis.kuleuven.be]
Sent: Wednesday, June 30, 2010 8:24 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] locator

We're considering expanding our service with an item locator. Mapping the
library (http://mashedlibrary.com/wiki/index.php?title=Mapping_the_library)
describes how to build this with Google Maps. But is this really the way to go?
Does anyone have any experience with this? Does anyone have best practices
for this kind of project, given that we have about 20 buildings spread all
over the town?

Tom

---
Tom Vanmechelen

K.U.Leuven / LIBIS
W. De Croylaan 54 bus 5592
BE-3001 Heverlee
Tel  +32 16 32 27 93


Re: [CODE4LIB] locator

2010-06-30 Thread Owen Stephens
Hi Tom,

The Mapping the Library project started out (in my head) as simply using
existing mapping tools to provide an interface to a map. The way the project
went when we sat down and played for a day was slightly different, although
still vaguely interesting :)

The thinking behind using Google Maps (which would apply to other 'mapping'
interfaces, e.g. OpenLayers) was simply that you get a set of tools designed
to help people navigate around a physical space. You can dispense with the
geographic representation and simply use your own floorplan images. Whether
this is the way to go probably depends on your requirements, but you would
get functions like the ability to drop markers 'for free', as it were, along
with a well-documented approach, since the Google Maps and similar APIs come
with good documentation.
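
To make that concrete, here is a minimal sketch (TypeScript against the
Google Maps JavaScript API v3, which is assumed to already be loaded on the
page) of the floorplan-as-custom-map-type idea. The tile URL, coordinates,
and call number are hypothetical placeholders, not from any real deployment:

  // Serve your own floorplan tiles as a custom map type; the geography
  // disappears and the map plane becomes your building.
  const floorplan = new google.maps.ImageMapType({
    getTileUrl: (coord, zoom) =>
      `https://example.org/tiles/floor1/${zoom}/${coord.x}/${coord.y}.png`,
    tileSize: new google.maps.Size(256, 256),
    maxZoom: 5,
    name: "Floor 1",
  });

  const map = new google.maps.Map(document.getElementById("map")!, {
    center: new google.maps.LatLng(0, 0), // arbitrary plane, not real geography
    zoom: 3,
    mapTypeControl: false,
  });
  map.mapTypes.set("floorplan", floorplan);
  map.setMapTypeId("floorplan");

  // The 'for free' part: drop a marker on the shelf holding the item.
  new google.maps.Marker({
    position: new google.maps.LatLng(0.002, -0.001), // hypothetical shelf spot
    map,
    title: "QA76.9 .D3 2010",
  });

You still have to solve the hard part yourself, of course: translating a
call number into a floorplan coordinate.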

However, more than once it has been suggested that this is a more complex
approach than is required. (I'm still not convinced by this - I think there
are real strengths to the 'off the shelf' approach.)

Some other bits and pieces that may be of interest:

My writeup of the day we worked on the Mapping the Library project:
http://www.meanboyfriend.com/overdue_ideas/2009/12/mashing-and-mapping/
A JISC-funded project looking at producing an 'item locator' service at the LSE:
http://findmylibrarybook.blogspot.com/

Owen

Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936

On 30 Jun 2010, at 13:24, Tom Vanmechelen wrote:

 We're considering expanding our service with an item locator. Mapping the
 library (http://mashedlibrary.com/wiki/index.php?title=Mapping_the_library)
 describes how to build this with Google Maps. But is this really the way to
 go? Does anyone have any experience with this? Does anyone have best
 practices for this kind of project, given that we have about 20 buildings
 spread all over the town?
 
 Tom
 
 ---
 Tom Vanmechelen
 
 K.U.Leuven / LIBIS
 W. De Croylaan 54 bus 5592
 BE-3001 Heverlee
 Tel  +32 16 32 27 93


Re: [CODE4LIB] locator

2010-06-30 Thread Keith Jenkins
Tom,

Before spending too much time trying to integrate building floorplans
with Google Maps, I would consider whether the maximum zoom level
(currently 20, which is around 3 pixels per foot) will allow you to
provide the detail needed for your floorplan.

This might only be an issue, though, if you want the floorplan to
display as an overlay on the regular Google Maps basemap.
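
For a rough sanity check of that figure, here is a small TypeScript sketch
using the standard Web Mercator ground-resolution formula (256-pixel tiles);
the latitude is picked for Leuven purely for illustration:

  // Metres per pixel on a Web Mercator basemap at a given zoom and latitude.
  function metresPerPixel(zoom: number, latDeg: number): number {
    const EARTH_CIRCUMFERENCE = 40075016.686; // metres at the equator
    return (
      (EARTH_CIRCUMFERENCE * Math.cos((latDeg * Math.PI) / 180)) /
      (256 * Math.pow(2, zoom))
    );
  }

  const mpp = metresPerPixel(20, 50.9); // ~0.094 m per pixel near Leuven
  const pixelsPerFoot = 0.3048 / mpp;   // ~3.2 px per foot, matching the figure

At that resolution a 90 cm shelf bay is only about 10 pixels wide, which is
enough to show which bay, but not which shelf.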

Keith

Keith Jenkins
GIS/Geospatial Applications Librarian
Mann Library, Cornell University
Ithaca, New York 14853


On Wed, Jun 30, 2010 at 8:24 AM, Tom Vanmechelen
tom.vanmeche...@libis.kuleuven.be wrote:
 We're considering expanding our service with an item locator. Mapping the
 library (http://mashedlibrary.com/wiki/index.php?title=Mapping_the_library)
 describes how to build this with Google Maps. But is this really the way to
 go? Does anyone have any experience with this? Does anyone have best
 practices for this kind of project, given that we have about 20 buildings
 spread all over the town?

 Tom

 ---
 Tom Vanmechelen

 K.U.Leuven / LIBIS
 W. De Croylaan 54 bus 5592
 BE-3001 Heverlee
 Tel  +32 16 32 27 93



Re: [CODE4LIB] code4libb...@loc

2010-06-30 Thread Stephen Little
Love the soundtrack, Eric. :-)

Stephen Little
University of Notre Dame
LinkedIn profile: http://www.linkedin.com/in/stephenmalittle


On Wed, Jun 30, 2010 at 9:34 AM, Eric Lease Morgan emor...@nd.edu wrote:

 Movie of code4libb...@loc temporarily at:

  http://infomotions.com/tmp/loc/

 Thanks guys. I had fun.

 --
 Eric Morgan



[CODE4LIB] Innovative's Synergy

2010-06-30 Thread Cindy Harper
Hi All - III is touting their web-services-based Synergy product as having
the efficiency of a pre-indexed service and the timeliness of a just-in-time
service.  Does anyone know if the agreements they have made with database
vendors to use these web services preclude an organization from developing an
open-source client to take advantage of those web services?  Just curious.
I suppose I should put my question to EBSCO and ProQuest directly.


Cindy Harper, Systems Librarian
Colgate University Libraries
char...@colgate.edu
315-228-7363


Re: [CODE4LIB] locator

2010-06-30 Thread Dave Caroline
I do suggest you look at your locations carefully before you dive in.

For reserved stock held in boxes, an item's location is the box, and the
box has its own location. Moving the box to a new shelf in another room
then becomes a single update to the box's location.

Some items contain other items in a sleeve or pocket, so a location can
also be another item and that item's ID.

And people like to move shelves around, but that's covered in the
Code4Lib article.
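
A minimal sketch of that containment idea (TypeScript, with illustrative
names only, not from any real system): an item's effective location is
resolved by walking up its parent chain, so moving a box is a single update
to the box's own record.

  interface Holding {
    id: string;
    parentId?: string; // the box, book, or shelf that contains this record
    location?: string; // set only on the outermost container
  }

  const records = new Map<string, Holding>([
    ["shelf-12", { id: "shelf-12", location: "Room B, bay 3" }],
    ["box-7", { id: "box-7", parentId: "shelf-12" }],
    ["item-42", { id: "item-42", parentId: "box-7" }],
  ]);

  // Walk parents until a record that carries a location is found.
  function resolveLocation(id: string): string | undefined {
    let rec = records.get(id);
    while (rec) {
      if (rec.location) return rec.location;
      rec = rec.parentId ? records.get(rec.parentId) : undefined;
    }
    return undefined;
  }

  // Moving box-7 to another shelf is one update; item-42 follows along.
  records.set("shelf-99", { id: "shelf-99", location: "Room C, bay 1" });
  records.get("box-7")!.parentId = "shelf-99";
  console.log(resolveLocation("item-42")); // "Room C, bay 1"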

I am implementing barcodes so I can stock-check and update the
locations of books on a shelf or in a box. I put the barcode on the
spines and on the loose contents of a book (since the loose contents
were in a book, the shelf check will assume they are still in the
book), so a check takes just a few seconds. The check also sets any
book that was supposed to be there, but wasn't seen, to a missing state.
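
The shelf-check pass itself can be sketched the same way (again, field names
are illustrative): compare what was scanned against what the catalogue
expects on that shelf, confirm the hits, and flag the rest as missing.

  interface StockRecord {
    barcode: string;
    location: string;
    status: "present" | "missing";
  }

  function shelfCheck(
    expected: StockRecord[], // records the system thinks are on this shelf
    scanned: Set<string>,    // barcodes actually read during the pass
    shelf: string,
  ): void {
    for (const rec of expected) {
      if (scanned.has(rec.barcode)) {
        rec.location = shelf; // confirm or refresh the location
        rec.status = "present";
      } else {
        rec.status = "missing"; // supposed to be there, wasn't seen
      }
    }
  }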

Dave Caroline


Re: [CODE4LIB] Innovative's Synergy

2010-06-30 Thread Walker, David
Hi Cindy,

Both the Ebsco and Proquest APIs are definitely available to customers.  We're
using the Ebsco one in our Xerxes application, for example.  (I'll send you a
link off-list, Cindy.)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Cindy Harper 
[char...@colgate.edu]
Sent: Wednesday, June 30, 2010 9:11 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Innovative's Synergy

Hi All - III is touting their web-services-based Synergy product as having
the efficiency of a pre-indexed service and the timeliness of a just-in-time
service.  Does anyone know if the agreements they have made with database
vendors to use these web services preclude an organization from developing an
open-source client to take advantage of those web services?  Just curious.
I suppose I should put my question to EBSCO and ProQuest directly.


Cindy Harper, Systems Librarian
Colgate University Libraries
char...@colgate.edu
315-228-7363


[CODE4LIB] DIY aggregate index

2010-06-30 Thread Cory Rockliff
You know, this leads into something I've been wondering about. You'll 
all have to pardon my ignorance, as I've never worked in a library with 
functioning management of e-resources.


Do libraries opt for these commercial 'pre-indexed' services simply 
because they're a good value proposition compared to all the work of 
indexing multiple resources from multiple vendors into one local index, 
or is it that companies like iii and Ex Libris are the only ones with 
enough clout to negotiate access to otherwise-unavailable database 
vendors' content?


Can I assume that if a database vendor has exposed their content to me 
as a subscriber, whether via Z39.50 or a web service or whatever, I'm 
free to cache and index all that metadata locally if I so choose? Is 
this something to be negotiated on a vendor-by-vendor basis, or is it an 
impossibility?


Cory

On 6/30/2010 12:37 PM, Walker, David wrote:

Hi Cindy,

Both the Ebsco and Proquest APIs are definitely available to customers.  We're
using the Ebsco one in our Xerxes application, for example.  (I'll send you a
link off-list, Cindy.)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Cindy Harper 
[char...@colgate.edu]
Sent: Wednesday, June 30, 2010 9:11 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Innovative's Synergy

Hi All - III is touting their web-services-based Synergy product as having
the efficiency of a pre-indexed service and the timeliness of a just-in-time
service.  Does anyone know if the agreements they have made with database
vendors to use these web services preclude an organization from developing an
open-source client to take advantage of those web services?  Just curious.
I suppose I should put my question to EBSCO and ProQuest directly.


Cindy Harper, Systems Librarian
Colgate University Libraries
char...@colgate.edu
315-228-7363

--
Cory Rockliff
Technical Services Librarian
Bard Graduate Center: Decorative Arts, Design History, Material Culture
18 West 86th Street
New York, NY 10024
T: (212) 501-3037
rockl...@bgc.bard.edu



Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Jonathan Rochkind

Cory Rockliff wrote:
Do libraries opt for these commercial 'pre-indexed' services simply 
because they're a good value proposition compared to all the work of 
indexing multiple resources from multiple vendors into one local index, 
or is it that companies like iii and Ex Libris are the only ones with 
enough clout to negotiate access to otherwise-unavailable database 
vendors' content?
A little bit of both, I think. A library probably _could_ negotiate 
access to that content... but it would be a heck of a lot of work. When 
the staff time for those negotiations is factored in, the commercial 
option becomes a good value proposition, regardless of how much the 
licensing would cost you.  And yeah, then there's the staff time to 
actually ingest and normalize and troubleshoot data flows for all that 
stuff on a regular basis -- I've heard stories of libraries that tried 
to do that in the early 90s, and it was nightmarish.

So, actually, I guess I've arrived at convincing myself it's mostly a 
good value proposition, in that a library probably can't afford to do 
that on their own, with or without licensing issues.


But I'd really love to see you try anyway; maybe I'm wrong. :)

Can I assume that if a database vendor has exposed their content to me 
as a subscriber, whether via Z39.50 or a web service or whatever, I'm 
free to cache and index all that metadata locally if I so choose? Is 
this something to be negotiated on a vendor-by-vendor basis, or is it an 
impossibility?

I doubt you can assume that.  I don't think it's an impossibility.

Jonathan


Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Walker, David
You might also need to factor in an extra server or three (in the cloud or 
otherwise) into that equation, given that we're talking 100s of millions of 
records that will need to be indexed.

 companies like iii and Ex Libris are the only ones with
 enough clout to negotiate access

I don't think III is doing any kind of aggregated indexing, hence their 
decision to try and leverage APIs.  I could be wrong.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Wednesday, June 30, 2010 1:15 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] DIY aggregate index

Cory Rockliff wrote:
 Do libraries opt for these commercial 'pre-indexed' services simply
 because they're a good value proposition compared to all the work of
 indexing multiple resources from multiple vendors into one local index,
 or is it that companies like iii and Ex Libris are the only ones with
 enough clout to negotiate access to otherwise-unavailable database
 vendors' content?

A little bit of both, I think. A library probably _could_ negotiate
access to that content... but it would be a heck of a lot of work. When
the staff time for those negotiations is factored in, the commercial
option becomes a good value proposition, regardless of how much the
licensing would cost you.  And yeah, then there's the staff time to
actually ingest and normalize and troubleshoot data flows for all that
stuff on a regular basis -- I've heard stories of libraries that tried
to do that in the early 90s, and it was nightmarish.

So, actually, I guess I've arrived at convincing myself it's mostly a
good value proposition, in that a library probably can't afford to do
that on their own, with or without licensing issues.

But I'd really love to see you try anyway; maybe I'm wrong. :)

 Can I assume that if a database vendor has exposed their content to me
 as a subscriber, whether via Z39.50 or a web service or whatever, I'm
 free to cache and index all that metadata locally if I so choose? Is
 this something to be negotiated on a vendor-by-vendor basis, or is it an
 impossibility?

I doubt you can assume that.  I don't think it's an impossibility.

Jonathan


Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Cory Rockliff
Well, this is the thing: we're a small, highly-specialized collection, 
so I'm not talking about indexing the whole range of content which a 
university like JHU or even a small liberal arts college would need 
to--it's really a matter of a few key databases in our field(s). Don't 
get me wrong, it's still a slightly crazy idea, but I'm dissatisfied 
enough with existing solutions that I'd like to try it.


On 6/30/2010 4:15 PM, Jonathan Rochkind wrote:
A little bit of both, I think. A library probably _could_ negotiate 
access to that content... but it would be a heck of a lot of work. 
When the staff time for those negotiations is factored in, the 
commercial option becomes a good value proposition, regardless of how 
much the licensing would cost you.  And yeah, then there's the staff 
time to actually ingest and normalize and troubleshoot data flows for 
all that stuff on a regular basis -- I've heard stories of libraries 
that tried to do that in the early 90s, and it was nightmarish.


I wonder if they would, in fact, demand licensing fees. I mean, we're 
already paying a subscription, and they're already exposing their 
content as a target for federated search applications (which probably do 
some caching for performance)...
So, actually, I guess I've arrived at convincing myself it's mostly a 
good value proposition, in that a library probably can't afford to 
do that on their own, with or without licensing issues.

--
Cory Rockliff
Technical Services Librarian
Bard Graduate Center: Decorative Arts, Design History, Material Culture
18 West 86th Street
New York, NY 10024
T: (212) 501-3037
rockl...@bgc.bard.edu



Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Cory Rockliff
We're looking at an infrastructure based on MarkLogic running on Amazon 
EC2, so the scale of data to be indexed shouldn't actually be that big 
of an issue. Also, as I said to Jonathan, I only see myself indexing a 
handful of highly relevant resources, so we're talking millions, rather 
than 100s of millions, of records.


On 6/30/2010 4:22 PM, Walker, David wrote:

You might also need to factor in an extra server or three (in the cloud or 
otherwise) into that equation, given that we're talking 100s of millions of 
records that will need to be indexed.


companies like iii and Ex Libris are the only ones with
enough clout to negotiate access

I don't think III is doing any kind of aggregated indexing, hence their 
decision to try and leverage APIs.  I could be wrong.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Wednesday, June 30, 2010 1:15 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] DIY aggregate index

Cory Rockliff wrote:

Do libraries opt for these commercial 'pre-indexed' services simply
because they're a good value proposition compared to all the work of
indexing multiple resources from multiple vendors into one local index,
or is it that companies like iii and Ex Libris are the only ones with
enough clout to negotiate access to otherwise-unavailable database
vendors' content?


A little bit of both, I think. A library probably _could_ negotiate
access to that content... but it would be a heck of a lot of work. When
the staff time for those negotiations is factored in, the commercial
option becomes a good value proposition, regardless of how much the
licensing would cost you.  And yeah, then there's the staff time to
actually ingest and normalize and troubleshoot data flows for all that
stuff on a regular basis -- I've heard stories of libraries that tried
to do that in the early 90s, and it was nightmarish.

So, actually, I guess I've arrived at convincing myself it's mostly a
good value proposition, in that a library probably can't afford to do
that on their own, with or without licensing issues.

But I'd really love to see you try anyway; maybe I'm wrong. :)


Can I assume that if a database vendor has exposed their content to me
as a subscriber, whether via Z39.50 or a web service or whatever, I'm
free to cache and index all that metadata locally if I so choose? Is
this something to be negotiated on a vendor-by-vendor basis, or is it an
impossibility?


I doubt you can assume that.  I don't think it's an impossibility.

Jonathan






--
Cory Rockliff
Technical Services Librarian
Bard Graduate Center: Decorative Arts, Design History, Material Culture
18 West 86th Street
New York, NY 10024
T: (212) 501-3037
rockl...@bgc.bard.edu



Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Blake, Miriam E
We are one of those institutions that did this: we negotiated for lots of 
content YEARS ago (before the providers really knew what they or we were 
in for).

We have locally loaded records from the ISI databases, INSPEC, BIOSIS, and 
the Department of Energy (as well as from full-text publishers, but that is 
another story and system entirely). Aside from the contracts, I can also 
attest to the major amount of work it has been. We have 95M bibliographic 
records, stored in 75TB of disk, and counting. It's all running on Solr, 
with a local interface and the distributed aDORe repository on the back end. 
About 2 FTE keep it running in production now.

Over the 15 years we've been loading this, we've had to migrate it 3 times 
and deal with all the dirty metadata, duplication, and other difficult 
issues around scale, plus a lack of content-provider interest in supporting 
the few of us who do this kind of stuff. We believe we have now achieved a 
standardized format (MPEG-21 DIDL and MARCXML, with some other standards 
mixed in), accessible through protocol-based services (OpenURL, REST, 
OAI-PMH, etc.), so we hope we won't have to mess with the data records 
again and can move on to other, more interesting things.
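
For anyone curious what consuming such protocol-based services looks like,
pulling records over OAI-PMH is only a few lines; this is a hedged TypeScript
sketch with a hypothetical endpoint, and the metadataPrefix depends on what
the repository actually advertises:

  // Harvest a window of records as MARCXML over OAI-PMH 2.0.
  const base = "https://repository.example.org/oai"; // hypothetical endpoint
  const url = `${base}?verb=ListRecords&metadataPrefix=marcxml&from=2010-06-01`;

  fetch(url)
    .then((res) => res.text())
    .then((xml) => {
      // Each <record> wraps MARCXML in <metadata>; a real harvester would
      // also follow <resumptionToken> elements to page through the set.
      const count = (xml.match(/<record>/g) ?? []).length;
      console.log(`fetched ${count} records`);
    });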

It is nice to have, and very fast - it very much beats federated search - 
and it (finally) allows us to begin to build neat services (for licensed 
users only!). Data mining? Of course a goal, but talk about sticky areas of 
contract negotiation. And in the end, you never have everything someone 
needs when they want all content about something specific. And yes, local 
loading is expensive, for a lot of reasons.

Ex Libris, Summon, etc. are now getting into the game from this angle. We 
so feel their pain, but I hope technology and content-provider engagement 
have improved enough to make it a bit easier for them! And it definitely 
adds a level of usability much improved over federated search.

My .02,

Miriam Blake
Los Alamos National Laboratory Research Library




On 6/30/10 3:20 PM, Rosalyn Metz rosalynm...@gmail.com wrote:

I know that there are institutions that have negotiated contracts for just
the content, sans interface.  But those that I know of have TONS of money
and are using a 3rd-party interface that ingests the data for them.  I'm not
sure what the terms of those contracts were or how they get the data, but it
can be done.



On Wed, Jun 30, 2010 at 5:07 PM, Cory Rockliff rockl...@bgc.bard.eduwrote:

 We're looking at an infrastructure based on MarkLogic running on Amazon
 EC2, so the scale of data to be indexed shouldn't actually be that big of an
 issue. Also, as I said to Jonathan, I only see myself indexing a handful of
 highly relevant resources, so we're talking millions, rather than 100s of
 millions, of records.


 On 6/30/2010 4:22 PM, Walker, David wrote:

 You might also need to factor in an extra server or three (in the cloud or
 otherwise) into that equation, given that we're talking 100s of millions of
 records that will need to be indexed.



 companies like iii and Ex Libris are the only ones with
 enough clout to negotiate access


 I don't think III is doing any kind of aggregated indexing, hence their
 decision to try and leverage APIs.  I could be wrong.

 --Dave

 ==
 David Walker
 Library Web Services Manager
 California State University
 http://xerxes.calstate.edu
 
 From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan
 Rochkind [rochk...@jhu.edu]
 Sent: Wednesday, June 30, 2010 1:15 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] DIY aggregate index

 Cory Rockliff wrote:


 Do libraries opt for these commercial 'pre-indexed' services simply
 because they're a good value proposition compared to all the work of
 indexing multiple resources from multiple vendors into one local index,
 or is it that companies like iii and Ex Libris are the only ones with
 enough clout to negotiate access to otherwise-unavailable database
 vendors' content?



 A little bit of both, I think. A library probably _could_ negotiate
 access to that content... but it would be a heck of a lot of work. When
 the staff time for those negotiations is factored in, the commercial
 option becomes a good value proposition, regardless of how much the
 licensing would cost you.  And yeah, then there's the staff time to
 actually ingest and normalize and troubleshoot data flows for all that
 stuff on a regular basis -- I've heard stories of libraries that tried
 to do that in the early 90s, and it was nightmarish.

 So, actually, I guess I've arrived at convincing myself it's mostly a
 good value proposition, in that a library probably can't afford to do
 that on their own, with or without licensing issues.

 But I'd really love to see you try anyway; maybe I'm wrong. :)



 Can I assume that if a database vendor has exposed their content to me
 as a subscriber, whether via Z39.50 or a web service or whatever,
 I'm free to cache and index all that