Re: [CODE4LIB] SFX API scaling

2015-07-01 Thread Walker, David
Hi Joe,

Jonathan and a few others who have worked with the SFX API more thoroughly than 
I have may offer you better/different advice here.  But, in my (now somewhat 
dated) experience, the SFX API is much slower than the end-user interface.  

For the end-user interface, SFX only needs to show the user the options (= 
targets).  For the API, it also has to generate the final destination URL for 
each target, and that code also (perhaps inadvertently) records a hit in the 
usage statistics for each target.  So each API call generates a lot more 
database/processing overhead.

In fact, the performance has never been good enough for us to justify the kind 
of work you're undertaking here, even though we've wanted to.

--Dave

-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Joe 
Ferrie
Sent: Tuesday, June 30, 2015 10:05 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] SFX API scaling

Hi all,

The CDL has been working on a front end for the SFX API (on the Umlaut model), 
and we are wondering whether we can expect SFX performance through the API to have 
the same profile as performance through the user interface. We're specifically 
wondering whether the tolerance for concurrency would be the same. We are 
asking Ex Libris about this, but we thought those with practical experience 
might have some thoughts, or maybe someone has even done comparative 
benchmarking. Any thoughts?

Joe Ferrie
Application Programmer
California Digital Library
University of California
Office of the President
415 20th Street, 4th Floor
Oakland, CA 94612-2901


Re: [CODE4LIB] Tool for feedback on document

2013-10-21 Thread Walker, David
Just wanted to thank everyone for this feedback!

I'm leaning toward using digress.it.

--Dave

-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
McCanna, Terran
Sent: Wednesday, October 16, 2013 11:34 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Tool for feedback on document

I've used http://a.nnotate.com/ for this several times. You can leave comments 
in line with the text, respond to other comments, display/print the comments in 
different ways, and one of my favorite things is that the people you send the 
link to don't have to create an account. 


Terran McCanna
PINES Program Manager
Georgia Public Library Service
1800 Century Place, Suite 150
Atlanta, GA 30345
404-235-7138
tmcca...@georgialibraries.org 

- Original Message -
From: "Ken Varnum" 
To: CODE4LIB@LISTSERV.ND.EDU
Sent: Wednesday, October 16, 2013 2:23:51 PM
Subject: Re: [CODE4LIB] Tool for feedback on document

Commentpress and digress.it are two Wordpress variants that offer 
paragraph-by-paragraph threaded commenting. Commentpress is quite old (we used 
it here: http://www.lib.umich.edu/islamic/ in a collaborative cataloging 
project sponsored by CLIR and funded by Mellon).


--
Ken Varnum | Web Systems Manager | MLibrary - University of Michigan - Ann 
Arbor var...@umich.edu | @varnum | http://www.lib.umich.edu/users/varnum |
734-615-3287


On Wed, Oct 16, 2013 at 2:12 PM, Michael J. Giarlo < 
leftw...@alumni.rutgers.edu> wrote:

> Hi David,
>
> Google Drive (née Docs) will allow you to share your document with 
> other users so that they can view and comment (and not edit), FWIW.  
> There may be more elegant solutions that allow, say, nested/threaded 
> comments.  I know there is blog software out there that does this, but 
> it's been a few years so I forget what it's called.
>
> -Mike
> 
>
>
> On Wed, Oct 16, 2013 at 11:06 AM, Walker, David  >wrote:
>
> > Hi all,
> >
> > We're looking to put together a large policy document, and would 
> > like to be able to solicit feedback on the text from librarians and 
> > staff across two dozen institutions.
> >
> > We could just do that via email, of course.  But I thought it might 
> > be better to have something web-based.  A wiki is not the best 
> > solution here, as I don't want those providing feedback to be able to 
> > change the text itself, but rather just leave comments.
> >
> > My fallback plan is to just use Wordpress, breaking the document up 
> > into various pages or posts, which people can then comment on.  But 
> > it seems to me there must be a better solution here -- maybe one 
> > where people can leave comments in line with the text?
> >
> > Any suggestions?
> >
> > Thanks,
> >
> > --Dave
> >
> > -
> > David Walker
> > Director, Systemwide Digital Library Services California State 
> > University
> > 562-355-4845
> >
>


[CODE4LIB] Tool for feedback on document

2013-10-16 Thread Walker, David
Hi all,

We're looking to put together a large policy document, and would like to be 
able to solicit feedback on the text from librarians and staff across two dozen 
institutions.

We could just do that via email, of course.  But I thought it might be better 
to have something web-based.  A wiki is not the best solution here, as I don't 
want those providing feedback to be able to change the text itself, but rather 
just leave comments.

My fallback plan is to just use Wordpress, breaking the document up into 
various pages or posts, which people can then comment on.  But it seems to me 
there must be a better solution here -- maybe one where people can leave 
comments in line with the text?

Any suggestions?

Thanks,

--Dave 

-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


Re: [CODE4LIB] PHP HTTP Client preference

2013-09-03 Thread Walker, David
We're also using Guzzle, and really like it.
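
For anyone who hasn't tried it yet, a basic GET looks roughly like this (a 
sketch against the current Guzzle 3 API, not tested; the URL is made up):

  require 'vendor/autoload.php';

  use Guzzle\Http\Client;

  // point the client at a base URL, then build and send requests from it
  $client = new Client('http://api.example.org');
  $response = $client->get('/records?id=12345')->send();

  echo $response->getStatusCode() . "\n";
  echo $response->getBody(true);   // the body as a string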

--Dave

-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
Coombs
Sent: Tuesday, September 03, 2013 3:52 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] PHP HTTP Client preference

Thanks so much for all the feedback guys. Keep it coming. I'll definitely check 
out Guzzle as an option.

Karen

On Tue, Sep 3, 2013 at 4:26 PM, Hagedon, Mike  
wrote:
> Guzzle++
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Kevin S. Clarke
> Sent: Tuesday, September 03, 2013 8:37 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] PHP HTTP Client preference
>
> Another +1 for Guzzle
>
> Kevin
>
>
>
> On Tue, Sep 3, 2013 at 11:32 AM, Kevin Reiss  wrote:
>
>> I can second Guzzle. We have been using it for our in-house PHP 
>> applications that require HTTP interactions for about six months and 
>> it has worked out very well. Guzzle has also been incorporated as the 
>> new default HTTP client in the next version of Drupal.
>>
>>
>> 
>>  From: Ross Singer 
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Sent: Tuesday, September 3, 2013 10:59 AM
>> Subject: Re: [CODE4LIB] PHP HTTP Client preference
>>
>>
>> Hey Karen,
>>
>> We use Guzzle: http://guzzlephp.org/
>>
>> It's nice, seems to work well for our needs, is available in 
>> packagist, and is the HTTP client library in the official AWS SDK 
>> libraries (which was a big endorsement, in our view).
>>
>> We're still in the process of moving all of our clients over to it 
>> (we built a homegrown HTTP client on top of CURL first), but have 
>> been really impressed with it so far.
>>
>> -Ross.
>>
>> On Sep 3, 2013, at 10:49 AM, "Coombs,Karen"  wrote:
>>
>> > One project I'm working on for OCLC right now is building a set of
>> object-oriented client libraries in PHP that will assist developers 
>> with interacting with our web services. The first of these libraries 
>> we'd like to release provides classes for authentication and 
>> authorization to our web services. You can read more about 
>> Authentication/Authorization and our web services on the Developer 
>> Network site
>> >
>> > The purpose of this project is to make a simple and easy to use 
>> > object
>> oriented library that supports our various authentication methods.
>> >
>> > This library needs to make HTTP requests and I've looked at a number 
>> > of
>> potential libraries and HTTP clients in PHP.
>> >
>> > Why am I not just considering using CURL natively?
>> >
>> > The standard CURL functions in PHP are not object-oriented. All of 
>> > our
>> code libraries (both our authentication/authorization library and 
>> future libraries for interacting with the REST services themselves) 
>> need to perform a robust set of HTTP interactions. Using the standard 
>> CURL functions would very likely increase the size of the code 
>> libraries and the potential for errors and inconsistencies within the 
>> code base because of how much we use HTTP.
>> >
>> > Given this, I believe there are three possible options and would 
>> > like to
>> get the community's feedback on which option you would prefer.
>> >
>> > Option 1. - Write my own HTTP Client on top of the standard PHP 
>> > CURL
>> implementation. This means people using the code library can just 
>> download it and not worry about any dependencies. However, that means 
>> adding extra code to our library which, although essential, isn't at 
>> the core of what we're trying to support. My fear is that my client 
>> will never be as good as an existing client.
>> >
>> > Option 2. - Use HTTPful code library (http://phphttpclient.com/).
>> > This
>> is a well developed and supported code base which is designed 
>> specifically to support REST interactions. It is easy to install via 
>> Composer or Phar, or manually. It is slim and trim and only does the HTTP 
>> Client functions.
>> It does create a dependency on an external (but small) library.
>> >
>> > Option 3. - Use the Zend 2 HTTPClient. This is a well developed and
>> supported code base. The biggest downside is that Zend is a massive 
>> code library to require. A developer could choose to download only 
>> the specific set of classes that we are dependent on, but asking 
>> people to do this may prove confusing to some developers.
>> >
>> > I'd appreciate your feedback so we can provide the most useful set 
>> > of
>> libraries to the community.
>> >
>> > Karen
>> >
>> > Karen A. Coombs
>> > Senior Product Analyst
>> > WorldShare Platform
>> > coom...@oclc.org
>> > 614-764-4068
>> > Skype: librarywebchic
>>


Re: [CODE4LIB] discovery layers question

2013-05-08 Thread Walker, David
I would point out, too, that Encore (the example given) is not one of the major 
discovery systems.  The 'Big Four' discovery services, as they are often 
called, are those developed by OCLC, Ex Libris, Serials Solutions, and Ebsco.

Encore is really a different kind of system.  It has no aggregated article 
index of its own.  Rather, the system is designed to integrate your local 
catalog results with article results from an external service, either via 
Innovative's own federated search system or, more recently, from Ebsco 
Discovery.

In that way, Encore is a lot more like VUFind, in fact, than the Big Four 
discovery services, in so far as you can integrate article results from a 
discovery service (currently Summon and Primo Central) into VUFind as well.

To my mind, then, there's little reason to run both VUFind and Encore 
(specifically).  Damien, I think, provided a good case for why you might want 
to run VUFind in conjunction with a true discovery service: that is, to have 
greater control over the local catalog results and the interface.  Other 
institutions are doing similar things with Blacklight or their own systems.

--Dave
-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Jonathan Rochkind
Sent: Monday, May 06, 2013 10:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] discovery layers question

I think you first need to be clear about what you would be trying to do by 
using a hosted discovery product simultaneously with VuFind. What would be the 
goals, why would you be doing this, what are you trying to accomplish?  Would 
you be offering both Encore and a VuFind implementation as alternate services 
for your users to use? Or would they be combined somehow? How would you want to 
combine them?

You need to be clear on this internally, on what you're trying to do, to have 
any hope of success.  Being clear about that when you ask a question on the 
list will also elicit more useful answers; I'm not really entirely sure what 
you're asking.

On 5/6/2013 1:39 PM, Donna Campbell wrote:
> Dear Colleagues,
>
> Is anyone in using VuFind as well as one of the major webscale 
> discovery layers (e.g., Encore)? If so, what complications do you encounter?
>
> Cordially,
>
> Donna R. Campbell
> Technical Services & Systems Librarian
> (215) 935-3872 (phone)
> (267) 295-3641 (fax)
> Mailing Address (via USPS):
> Westminster Theological Seminary Library P.O. Box 27009 Philadelphia, 
> PA 19118  USA Shipping Address (via UPS or FedEx):
> Westminster Theological Seminary Library
> 2960 W. Church Rd.
> Glenside, PA 19038  USA
>
>


Re: [CODE4LIB] Video from the Conference

2013-02-15 Thread Walker, David
Ditto.  It was almost like being there.  I even had a beer each night.

--Dave
-
David Walker
Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Peter 
Schlumpf
Sent: Friday, February 15, 2013 3:20 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Video from the Conference

Thanks Francis and everyone else who made the conference available via 
streaming video for those of us who could not attend.  It was great!

Peter


-Original Message-
>From: Francis Kayiwa 
>Sent: Feb 15, 2013 4:56 PM
>To: CODE4LIB@LISTSERV.ND.EDU
>Subject: [CODE4LIB] Video from the Conference
>
>In order to keep myself honest and not use up Tara Robertson's 
>generosity, I will be uploading the files to my YouTube account as they 
>become available. Since the Lightning Talks work better with the YouTube 
>15-minute limit, they will go up first.
>
>http://www.youtube.com/watch?v=LRVYmdXJ8OQ
>
>Cheers,
>./fxk
>--
>Don't hate yourself in the morning -- sleep till noon.


Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers -- MIA?

2012-09-06 Thread Walker, David
I've always preferred search engine-based spell checkers over other approaches. 
 I've not seen a library application using a different strategy (dictionary or 
corpus based) that does nearly as well.

We've used the Yahoo [1] and Bing [2] spell check APIs for years now in our 
applications.  They used to be free (Microsoft just ended that last month).  
But even now they are very reasonably priced (e.g., Yahoo charges $0.10 per 
1,000 queries), and well worth it in my experience.  

The only drawback is that they will suggest corrections that can result in zero 
hits in your application, especially if you are using it for a small collection 
like a local catalog.  You can mitigate that by doing a quick pre-check for 
hits before showing the suggestion to users.
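
In rough pseudo-PHP -- getSuggestion(), getHitCount(), and showDidYouMean() are 
all stand-ins for whatever your own application exposes -- the idea is just:

  $suggestion = getSuggestion($query);   // e.g. from the Yahoo or Bing API

  // only surface "Did you mean...?" when the suggestion actually finds something
  if ($suggestion != '' && getHitCount($suggestion) > 0) {
      showDidYouMean($suggestion);
  }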

--Dave

[1] http://developer.yahoo.com/search/boss/
[2] http://www.bing.com/developers/

-
David Walker
Interim Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Jonathan Rochkind
Sent: Thursday, September 06, 2012 6:45 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers 
-- MIA?

Solr has a feature to make spelling suggestions based on the actual terms in 
the corpus... but it's hardly a panacea.  A straightforward naive 
implementation of the Solr feature, on top of a large library catalog corpus, 
in many of our experiences still gives odd and unuseful suggestions (including 
sometimes suggesting typos from the corpus, or taking an already 'correct' 
word and suggesting an entirely different but 
lexicographically similar word as a 'correction').   And then there's figuring 
out the right UI (and managing to make it work on top of the Solr feature) for 
multi-term queries where each independent part may or may not have a 
'correction'. 

Turns out spelling suggestions are kind of hard. And it's kind of amazing that 
google does it so well (and they use some fairly complex techniques to do so, I 
think, based on a whole bunch of data and metadata they have including past 
searches and clickthroughs, not just the corpus).   

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Ross Singer 
[rossfsin...@gmail.com]
Sent: Thursday, September 06, 2012 9:37 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] U of Baltimore, Final Usability Report, link resolvers 
-- MIA?

On Thu, Sep 6, 2012 at 9:06 AM, Cindy Harper  wrote:
> I was going to comment that some of the Encore shortcomings mentioned 
> in the PDF do seem to be addressed in current Encore versions, 
> although some of these issues still have to be addressed - for instance, 
> there is a spell-check, but it can give some surprising suggestions, 
> though suggestions do clue the user in to the fact that they might 
> have a misspelling/typo.

I wrote about the woeful state of "spelling suggestions" a couple of years ago 
(among a lot of other things):

http://www.inthelibrarywiththeleadpipe.org/2009/were-gonna-geek-this-mother-out/

(you can skip on down to the "In the Absence of Suggestion, There is Always 
Search..." - it's pretty TL;DR-worthy)

Basically, the crux of it is, as long as spelling suggestions are based on 
standard dictionaries and not built /on the actual terms and phrases in the 
collection/ it's going to basically be a worthless feature.

I do note there, though, that BiblioCommons apparently must build their 
dictionaries on the metadata in the system.

-Ross.

>
> III's reaction to studies that report that users ignore the right-side 
> panel of search options was to provide a skin that has only two 
> columns - the facets on the left, and the search results on the 
> middle-to-right.
> This pushes important facets like the tag cloud very far down the 
> page, and causes a lot of scrolling, so I don't like this skin much.
>
> I recently asked a question on the encore users' list about how the 
> tag cloud could be improved - currently it suggests the most common 
> subfield a of the subject headings.  I would think it should include 
> the general, chronological, geographical subdivisions - subfields 
> x,y,z.  For instance, it doesn't provide good suggestions for improving the 
> search "civil war"
> without these. A chronological subdivision would help a lot there.  
> But then again, I haven't seen a prototype of how many relevant 
> subdivisions this would result in - would the subdivisions drown out 
> the main headings in the tag cloud?
>
> Cindy Harper, Systems Librarian
> Colgate University Libraries
> char...@colgate.edu
> 315-228-7363
>
>
>
> On Wed, Sep 5, 2012 at 5:30 PM, Jonathan LeBreton wrote:
>
>> Lucy Holman, Director of the U Baltimore Library, and a former 
>> colleague of mine at UMBC,  got back to me about this.  Her reply puts this
>> particular document into contex

Re: [CODE4LIB] Leader in MarcXML Files ( Record Length )

2012-06-29 Thread Walker, David
Whatever you do, downstream applications are probably just going to ignore that 
information anyway.  I've never bothered to look at the leader length when 
parsing MARC-XML.

I would just make it zeros.
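
(In PHP terms that's just a one-liner, e.g.:

  $leader = '00000' . substr($leader, 5);

and it should be equally trivial in whatever language your library is written in.)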

--Dave

-
David Walker
Interim Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Sullivan, Mark V
Sent: Friday, June 29, 2012 6:52 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Leader in MarcXML Files ( Record Length )

All,



I received a question regarding a software library I have created and released 
as open source.  The record length in the leader ( positions 0-4 ) was not 
being calculated correctly when writing as MarcXML.  However, this raises a 
more philosophical and larger question.  What is the point of the first five 
digits of the leader, outside of an ISO2709 / MARC21 encoded record?   Should I 
calculate the record length AS IF it would be encoded in ISO2709? This would be 
computationally non-trivial and would likely double the time necessary for my 
software to write a MarcXML file. Should I just make the first five digits of 
the leader '0', since it means nothing in the context of a MarcXML file?



Has anyone else pondered this question or have any input on how current systems 
work?



Keep in mind I could be writing a MarcXML record for a record created or 
modified in memory, so just using a pre-existing record length is not an option.



Many thanks for your consideration.


Mark V Sullivan
Digital Development and Web Coordinator
Technology and Support Services
University of Florida Libraries
352-273-2907 (office)
352-682-9692 (mobile)
mars...@uflib.ufl.edu


Re: [CODE4LIB] Best way to process large XML files

2012-06-08 Thread Walker, David
Since you mentioned SimpleXML, Kyle, I assume you're using PHP?

If so, you might look at XMLReader [1], which is a pull parser, and should give 
you better performance on large files than SimpleXML.
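
The usual pattern is to let XMLReader walk the file and hand each record element 
off to SimpleXML one at a time -- roughly like this (an untested sketch; swap 
'record' for whatever your repeating element is actually called):

  $reader = new XMLReader();
  $reader->open('big-file.xml');
  $doc = new DOMDocument();

  while ($reader->read()) {
      // act only on the opening tag of each repeating element
      if ($reader->nodeType == XMLReader::ELEMENT && $reader->localName == 'record') {
          // import just this one record and wrap it in SimpleXML for the easy syntax
          $record = simplexml_import_dom($doc->importNode($reader->expand(), true));
          // ... crosswalk $record here ...
      }
  }

  $reader->close();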

It is still based on libxml, though, so if that is still not fast enough for 
you, you can toss out my suggestion. :-)

--Dave

[1] http://php.net/manual/en/book.xmlreader.php

-
David Walker
Interim Director, Systemwide Digital Library Services
California State University
562-355-4845


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
Banerjee
Sent: Friday, June 08, 2012 11:36 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Best way to process large XML files

I'm working on a script that needs to be able to crosswalk at least a couple 
hundred XML files regularly, some of which are quite large.

I've thought of a number of ways to go about this, but I wanted to bounce this 
off the list since I'm sure people here deal with this problem all the time. My 
goal is to make something that's easy to read/maintain without pegging the CPU 
and consuming too much memory.

The performance and load I'm seeing from running the files through LibXML and 
SimpleXML on the large files is completely unacceptable. SAX is not out of the 
question, but I'm trying to avoid it if possible to keep the code more compact 
and easier to read.


I'm tempted to stream-edit out all line breaks (since they occur in 
unpredictable places), put new ones at the end of each record, and write the 
result to a temp file. Then I can 
read the temp file one line at a time and process using SimpleXML. That way, 
there's no need to load giant files into memory, create huge arrays, etc and 
the code would be easy enough for a 6th grader to follow. My proposed method 
doesn't sound very efficient to me, but it should consume predictable resources 
which don't increase with file size.

How do you guys deal with large XML files? Thanks,

kyle

Why the heck does the XML spec require a root element, particularly since 
large files usually consist of a large number of records/documents? This makes 
it absolutely impossible to process a file of any size without resorting to SAX 
or string parsing -- which takes away many of the advantages you'd normally 
have with an XML structure. 

--
--
Kyle Banerjee
Digital Services Program Manager
Orbis Cascade Alliance
baner...@orbiscascade.org / 503.999.9787


Re: [CODE4LIB] Proquest dissertation XML?

2012-05-10 Thread Walker, David
We use this to transform the PQ XML into the format that DSpace uses for batch 
loading -- the elements here are qualified Dublin Core, but the format is 
unique to DSpace.

   http://library.calstate.edu/media/txt/diss-to-dc.xsl
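
Applying it is the stock XSLTProcessor routine -- a sketch (requires PHP's XSL 
extension; the file names here are made up):

  $xsl = new DOMDocument();
  $xsl->load('diss-to-dc.xsl');

  $xml = new DOMDocument();
  $xml->load('proquest-record.xml');   // the XML file ProQuest delivers

  $proc = new XSLTProcessor();
  $proc->importStylesheet($xsl);

  echo $proc->transformToXML($xml);    // DSpace-ready metadata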

 --Dave

-
David Walker
Interim Director, Systemwide Digital Library Services
California State University
562-355-4845

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Reese, 
Terry
Sent: Thursday, May 10, 2012 8:19 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Proquest dissertation XML?

I actually wrote a simple one for someone else and include it in MarcEdit, or, 
for download to MarcEdit from the xslt registry the program uses (wish I would 
have been paying attention realizing someone else did this work) -- but I've 
attached.  This is fairly simplistic, but does the dissertation xml to marcxml.

--tr



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nick 
Ruest
Sent: Thursday, May 10, 2012 8:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Proquest dissertation XML?

Hi Michele,

This might be a helpful start: http://journal.code4lib.org/articles/1647

-nruest

On 12-05-10 11:11 AM, Michele R Combs wrote:
> Hi all --
>
> Has anyone written an XSL style sheet (or other script) to transform 
> ProQuest's dissertation metadata XML into (a) Dublin Core or (b) MARCXML?
>
> Thanks
>
> Michele
>
> +++
> Michele Combs
> Lead Archivist
> Special Collections Research Center
> Syracuse University
> 315-443-2081
> mrrot...@syr.edu
> scrc.syr.edu
> library-blog.syr.edu/scrc


Re: [CODE4LIB] PHP SUSHI client

2012-02-28 Thread Walker, David
Hi Joshua,

What do you see if you do:

  var_dump($client->__getFunctions());

That should show you the available methods and their parameters.
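
Once you can see the structure the WSDL expects, you pass it to GetReport as 
nested arrays.  Purely for illustration -- the real element names and nesting 
have to come from that dump (and from $client->__getTypes()), not from my guess 
here:

  $params = array(
      'Requestor'         => array('ID' => 'your-requestor-id'),
      'CustomerReference' => array('ID' => 'your-customer-id'),
      'ReportDefinition'  => array(
          'Name'    => 'JR1',
          'Filters' => array(
              'UsageDateRange' => array('Begin' => '2012-01-01', 'End' => '2012-01-31'),
          ),
      ),
  );

  $this->response = $client->GetReport($params);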

--Dave

-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Joshua 
Welker
Sent: Tuesday, February 28, 2012 6:00 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] PHP SUSHI client

Hi everyone, first post to this listserv.

I just started working on a SUSHI harvester client as an addition to a data 
management program I've built primarily with PHP/MySQL. There are a few 
projects I've seen listed by the NISO SUSHI organization for doing this, but 
they are all in Java or .NET. I am not very familiar with those languages and 
want my app to stay in the realm of PHP for continuity sake, so the 
roll-your-own approach is the only way to go.

I am having trouble understanding how WSDL and SOAP object methods interact 
with my code. I know from reading the SUSHI protocol standard that there is a 
method called GetReport that is used to retrieve the report. What I am having a 
hard time understanding is how I can pass data into the GetReport method.

For example, I have the following code in a class:

$client = new SoapClient($this->sushiURL); $this->response = 
$client->GetReport(???);

Where the ??? is, I need to be passing in the basic parameters of a SUSHI 
request: customer ID, requestor ID, date range, etc. As an alternative, I guess 
I could do it "longhand" and pass in an entire XML file, but I'd like to learn 
the standard way using the WSDL method as specified in the protocol definition.

This is somewhat unrelated, but is it possible to limit the COUNTER data 
returned to a particular database? Most vendors such as EBSCO allow you to 
limit COUNTER reports to a particular database in their admin modules, but I 
don't see any way in the SUSHI standard to specify one.

If anyone with experience rolling a SUSHI client could give me some pointers 
here, I'd greatly appreciate it.


Josh Welker
Electronic/Media Services Librarian
College Liaison
University Libraries
Southwest Baptist University
417.328.1624


Re: [CODE4LIB] Experience with codeIgniter?

2011-12-14 Thread Walker, David
It seems to me that WordPress would be good for the "simple and lightweight" 
part of their website.  It would allow them to easily create, delete, and 
update pages for the site.  Plus, if they have press releases or other types of 
newsy content, WordPress is second to none for blogging.  But you could say 
much the same for any CMS, really. 

The real trick here, it seems, is what to do with this database they have.  

In order to "manage" that, you really do need to create some kind of 
specialized application.  Drupal, and some of the other CMS's, have tools for 
creating those kinds of applications.  But, depending on what this database 
actually consists of, in some respects, it can be easier just to build 
something from scratch.  And if the consultant is going to do that for this 
organization using CodeIgniter (or any other programming framework), then that 
certainly makes sense.

--Dave

-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
Coyle
Sent: Wednesday, December 14, 2011 6:54 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Experience with codeIgniter?

Thanks, Dave and Mark -- this is exactly what I needed to hear. The "folks" are 
one of those extremely poor non-profits with almost no staff and zero technical 
skills. A consulting company is pushing them in this direction saying that 
Drupal is buggy and WordPress is ...  
well, I don't know. Dang! I hate being in the middle of this. I still think 
they'd be better off going with one of the "known" CMS packages.

kc

Quoting "Walker, David" :

> Are your 'folks' looking for a content management system, Karen?
>
> As Mark just mentioned, CodeIgniter is a web application development 
> framework -- that is, a set of reusable programming code that makes it 
> easier for programmers to build applications for the web.  The key 
> terms there being "programmers" and "build."
>
> That is a very different kind of thing from  Drupal or WordPress, 
> which are systems (that have already been built) to manage content for 
> a website.  You don't have to be a programmer to use either of those.
>
> --Dave
> -
> David Walker
> Library Web Services Manager
> California State University
>
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Mark Jordan
> Sent: Wednesday, December 14, 2011 6:08 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Experience with codeIgniter?
>
> Karen,
>
> I used CI for a project last summer, and thought it was easy to learn 
> if you had done some PHP programming before and were familiar with MVC 
> architecture, well documented, and had a fairly rich feature set. 
> However, my impression is that it had a very small plugin/module 
> ecosystem compared to Drupal or Wordpress. Before recommending it, you 
> should review the categories under 'Contributions' at 
> http://codeigniter.com/wiki to see if you can identify any glaring 
> holes. But, overall, I'd say it's a pretty good PHP MVC framework (not 
> that I've compared a lot of them).
>
> Mark
>
> Mark Jordan
> Head of Library Systems
> W.A.C. Bennett Library, Simon Fraser University Burnaby, British 
> Columbia, V5A 1S6, Canada
> Voice: 778.782.5753 / Fax: 778.782.3023 / Skype: mark.jordan50 
> mjor...@sfu.ca
>
> - Original Message -
>> I'm helping some folks find a new platform for their web site, and 
>> someone has suggested codeIgniter as being simpler than Drupal or 
>> Wordpress. Anyone here have anything to say about it, good or bad? 
>> The site is small and light weight but it does have a database that 
>> needs to be managed.
>>
>> Thanks,
>> kc
>>
>> --
>> Karen Coyle
>> kco...@kcoyle.net http://kcoyle.net
>> ph: 1-510-540-7596
>> m: 1-510-435-8234
>> skype: kcoylenet
>



--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet


Re: [CODE4LIB] Experience with codeIgniter?

2011-12-14 Thread Walker, David
Are your 'folks' looking for a content management system, Karen?

As Mark just mentioned, CodeIgniter is a web application development framework 
-- that is, a set of reusable programming code that makes it easier for 
programmers to build applications for the web.  The key terms there being 
"programmers" and "build."

That is a very different kind of thing from  Drupal or WordPress, which are 
systems (that have already been built) to manage content for a website.  You 
don't have to be a programmer to use either of those.

--Dave
-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark 
Jordan
Sent: Wednesday, December 14, 2011 6:08 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Experience with codeIgniter?

Karen,

I used CI for a project last summer, and thought it was easy to learn if you 
had done some PHP programming before and were familiar with MVC architecture, 
well documented, and had a fairly rich feature set. However, my impression is 
that it had a very small plugin/module ecosystem compared to Drupal or 
Wordpress. Before recommending it, you should review the categories under 
'Contributions' at http://codeigniter.com/wiki to see if you can identify any 
glaring holes. But, overall, I'd say it's a pretty good PHP MVC framework (not 
that I've compared a lot of them).

Mark

Mark Jordan
Head of Library Systems
W.A.C. Bennett Library, Simon Fraser University Burnaby, British Columbia, V5A 
1S6, Canada
Voice: 778.782.5753 / Fax: 778.782.3023 / Skype: mark.jordan50 mjor...@sfu.ca

- Original Message -
> I'm helping some folks find a new platform for their web site, and 
> someone has suggested codeIgniter as being simpler than Drupal or 
> Wordpress. Anyone here have anything to say about it, good or bad? The 
> site is small and light weight but it does have a database that needs 
> to be managed.
> 
> Thanks,
> kc
> 
> --
> Karen Coyle
> kco...@kcoyle.net http://kcoyle.net
> ph: 1-510-540-7596
> m: 1-510-435-8234
> skype: kcoylenet


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-06 Thread Walker, David
> I couldn't get json_encode() going on the server at work.

This usually means your server is running an older version of PHP.  If its OS 
is RHEL 5, then you've likely got PHP 5.1.6 installed.

  http://php.net/manual/en/function.json-encode.php
  json_encode
  PHP 5 >= 5.2.0
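
A quick way to confirm what the box is actually running:

  echo phpversion() . "\n";                  // e.g. 5.1.6
  var_dump(function_exists('json_encode'));  // false on anything older than 5.2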

--Dave
-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate 
Hill
Sent: Tuesday, December 06, 2011 8:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

I attached the app as it stands now.  There's something wrong w/ the regex 
matching in catscrape.php so only some of the images are coming through.

A bit more background info:

Someone said 'it's not that much data'.  Indeed it isn't, but that is because I 
intentionally gave myself an extremely simple data set to build/test with.  I'd 
anticipate more complex data sets in the future.

The .csv files are not generated automatically, we have a product called 
CollectionHQ that produces reports based on monthly data dumps from our ILS.  I 
was planning to create a folder that the people who run these reports can 
simply save the csv files to, and then the web app would just work without them 
having to think about it.

A bit of a side note, but I actually was taking the JSON approach briefly and 
it was working on my MAMP but for some reason I couldn't get
json_encode() going on the server at work.  I fiddled around w/ the .ini file a 
little while thinking I might need to do something there, got bored, and 
decided to take a different approach.

Also: should I be sweating the fact that basically every time someone mouses 
over one of these boxes they are hitting our library catalog with a query?  It 
struck me that this might be unwise.  But I don't know either way.

Thanks all.  Do with this what you will, even if that is nothing.  Just 
following the conversation has been enlightening.

Nate

On Tue, Dec 6, 2011 at 7:27 AM, Erik Hatcher  wrote:

> Again, with jrock... I was replying to the general "Ajax requests 
> returning HTML is outdated" theme, not to Nate's actual application.
>
> Certainly returning objects as code or data to a component (like, say, 
> SIMILE Timeline) is a reasonable use of data coming back from Ajax 
> requests, and covered in my "it depends" response :)
>
> A defender of the old?  Only in as much as the old is simpler, 
> cleaner, and leaner than all the new wheels being invented.  I'm 
> pragmatic, not dogmatic.
>
>Erik
>
>
> On Dec 6, 2011, at 09:34 , Godmar Back wrote:
>
> > On Tue, Dec 6, 2011 at 8:38 AM, Erik Hatcher 
> wrote:
> >
> >> I'm with jrock on this one.   But maybe I'm a luddite that didn't get the
> >> memo either (but I am credited for being one of the instrumental folks in
> >> the Ajax world, heh - in one or more of the Ajax books out there, us old
> >> timers called it "remote scripting").
> >>
> >>
> > On the in-jest rhetorical front, I'm wondering if referring to 
> > oneself as oldtimer helps in defending against insinuations that 
> > opposing technological change makes one a defender of the old ;-)
> >
> > But:
> >
> >
> >> What I hate hate hate about seeing JSON being returned from a 
> >> server for the browser to generate the view is stuff like:
> >>
> >>  string = "" + some_data_from_JSON + "";
> >>
> >> That embodies everything that is wrong about Ajax + JSON.
> >>
> >>
> > That's exactly why you use new libraries such as knockout.js, to avoid
> > just that. Client-side template engines with automatic data-bindings.
> >
> > Alternatively, AJAX frameworks use JSON and then interpret the 
> > returned objects as code. Take a look at the client/server traffic 
> > produced by ZK, for instance.
> >
> >
> >> As Jonathan said, the server is already generating dynamic HTML... 
> >> why have it return
> >
> >
> > It isn't. There is no server already generating anything; it's a new 
> > app Nate is writing. (Unless you count his work of the past two 
> > days). The dynamic HTML he's generating is heavily tailored to his 
> > JS. There's extremely tight coupling, which now exists across 
> > multiple files written in
> > multiple languages. Simply avoidable bad software engineering. 
> > That's not even making the computational cost argument that avoiding 
> > template processing on the server is cheaper. And with respect to 
> > Jonathan's argument of degradation, a degraded version of his app 
> > (presumably) would use  - or something like that, it'd look 
> > nothing like what he showed us yesterday.
> >
> > Heh - the proof of the pudding is in the eating. Why don't we create 
> > 2 versions of Nate's app, one with mixed server/client - like the 
> > one he's completing now, and I create the client-side based one, and 
> > then we compare side by side?  I'll work with Nate on that.
> >
> >  - Godmar
> 

Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-05 Thread Walker, David
I gotcha.  More information is, indeed, better. ;-)

So, on the PHP side, you just need to grab the term from the  query string, 
like this:

  $searchterm = $_GET['query'];

And then in your JavaScript code, you'll send an AJAX request, like:

  http://www.natehill.net/vizstuff/catscrape.php?query=Cooking
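
One small addition: when catscrape.php splices that term into your catalog 
search URL, encode it first, e.g. (a sketch -- the catalog URL here is made up):

  $searchterm = $_GET['query'];
  $url = 'http://catalog.example.org/search?q=' . urlencode($searchterm);
  $html = file_get_contents($url);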

Is that what you're looking for?

--Dave

-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate 
Hill
Sent: Monday, December 05, 2011 3:00 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

As always, I provided too little information.  Dave, it's much more involved 
than that

I'm trying to make a kind of visual browser of popular materials from one of 
our branches from a .csv file.

In order to display book covers for a series of searches by keyword, I query 
the catalog, scrape out only the syndetics images, and then display 4 of them.  
The problem is that I've hardcoded in a search for 'Drawing', rather than 
dynamically pulling the correct term and putting it into the catalog query.

Here's the work in process, and I believe it will only work in Chrome right now.
http://www.natehill.net/vizstuff/donerightclasses.php

I may have a solution, Jason's idea got me part way there.  I looked all over 
the place for that little snippet he sent over!

Thanks!



On Mon, Dec 5, 2011 at 2:44 PM, Walker, David  wrote:

> > And I want to update 'Drawing' to be 'Cooking'  w/ a jQuery hover 
> > effect on the client side then I need to make an Ajax request, correct?
>
> What you probably want to do here, Nate, is simply output the PHP 
> variable in your HTML response, like this:
>
>  <div id="foo"><?php echo $searchterm ?></div>
>
> And then in your JavaScript code, you can manipulate the text through 
> the DOM like this:
>
>  $('#foo').html('Cooking');
>
> --Dave
>
> -
> David Walker
> Library Web Services Manager
> California State University
>
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Nate Hill
> Sent: Monday, December 05, 2011 2:09 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] jQuery Ajax request to update a PHP variable
>
> If I have in my PHP script a variable...
>
> $searchterm = 'Drawing';
>
> And I want to update 'Drawing' to be 'Cooking'  w/ a jQuery hover 
> effect on the client side then I need to make an Ajax request, correct?
> What I can't figure out is what that is supposed to look like... 
> something like...
>
> $.ajax({
>  type: "POST",
>  url: "myfile.php",
>  data: "...not sure how to write what goes here to make it 'Cooking'..."
> });
>
> Any ideas?
>
>
> --
> Nate Hill
> nathanielh...@gmail.com
> http://www.natehill.net
>



--
Nate Hill
nathanielh...@gmail.com
http://www.natehill.net


Re: [CODE4LIB] jQuery Ajax request to update a PHP variable

2011-12-05 Thread Walker, David
> And I want to update 'Drawing' to be 'Cooking'  w/ a jQuery hover effect 
> on the client side then I need to make an Ajax request, correct?

What you probably want to do here, Nate, is simply output the PHP variable in 
your HTML response, like this:

  <div id="foo"><?php echo $searchterm ?></div>

And then in your JavaScript code, you can manipulate the text through the DOM 
like this:

  $('#foo').html('Cooking');

--Dave

-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Nate 
Hill
Sent: Monday, December 05, 2011 2:09 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] jQuery Ajax request to update a PHP variable

If I have in my PHP script a variable...

$searchterm = 'Drawing';

And I want to update 'Drawing' to be 'Cooking'  w/ a jQuery hover effect on the 
client side then I need to make an Ajax request, correct?
What I can't figure out is what that is supposed to look like... something 
like...

$.ajax({
  type: "POST",
  url: "myfile.php",
  data: "...not sure how to write what goes here to make it 'Cooking'..."
});

Any ideas?


--
Nate Hill
nathanielh...@gmail.com
http://www.natehill.net


Re: [CODE4LIB] marc-8

2011-10-24 Thread Walker, David
> I know yaz-marcdump changes the encoding bit in MARC
> leaders. Does it also convert MARC-8 characters to UTF-8?

Yes.  We use it for that purpose all the time.

--Dave

-
David Walker
Library Web Services Manager
California State University


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Monday, October 24, 2011 11:39 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] marc-8

On Oct 24, 2011, at 2:34 PM, Doran, Michael D wrote:

>> In Perl, how do I specify MARC-8 when reading (decoding) and writing
>> (encoding) data?
> 
> You can't.  MARC-8 is a character set that is unknown to the operating 
> system.  Your best bet is to convert MARC-8-encoded records into UTF-8. 

/me throws his hands up in the air and screams!

Okay. How do I go about converting MARC-8 encoded records into UTF-8? I know 
yaz-marcdump changes the encoding bit in MARC leaders. Does it also convert 
MARC-8 characters to UTF-8? (I guess I could simply try it and see what 
happens.)

-- 
Eric Morgan


Re: [CODE4LIB] Examples of Web Service APIs in Academic & Public Libraries

2011-10-11 Thread Walker, David
We use a number of web services provided by our (often vendor-supplied) library 
systems.  Those include: Metalib, SFX, bX, and Voyager.  We've also worked with 
Ebsco, Summon, Primo/Primo Central, and Worldcat APIs.



--Dave

-
David Walker
Library Web Services Manager
California State University

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Michel, 
Jason Paul
Sent: Saturday, October 08, 2011 10:34 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Examples of Web Service APIs in Academic & Public Libraries

Hello all,

I'm a lurker on this listserv and am interested in gaining some insight into 
your experiences of utilizing web service APIs in either an academic library or 
public library setting.

I'm writing a book for ALA Editions on the use of Web Service APIs in 
libraries.  Each chapter covers a specific API by delineating the 
technicalities of the API, discussing potential uses of the API in library 
settings, and step-by-step tutorials.

I'm already including examples of how my library (Miami University in Oxford, 
Ohio) is utilizing these APIs but would like to give the reader more examples 
from a variety of settings.

APIs covered in the book: Flickr, Vimeo, Google Charts, Twitter, Open Library, 
LibraryThing, Goodreads, OCLC.

So, what are you folks doing with APIs?

Thanks for any insight!

Kind regards,

Jason

--
Jason Paul Michel
User Experience Librarian
Miami University Libraries
Oxford, Ohio 45044
twitter:jpmichel


Re: [CODE4LIB] Code4Lib 2012 Seattle Update

2011-06-17 Thread Walker, David
> I doubt anyone is particularly wedded to the particularities of the current theme.

In fact, some of us dislike it entirely.  ;-)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Wednesday, June 15, 2011 1:41 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib 2012 Seattle Update

I doubt anyone is particularly wedded to the particularities of the
current theme. It probably doesn't matter, as long as you can put the
code4lib logo at the top with a banner-menu, if the theme changes, even
significantly. As long as it has pretty much the same functionality
exposed that it has now (and even that probably isn't that carefully
thought out).

On 6/15/2011 4:23 PM, Cary Gordon wrote:
> The theme looks like a minor hack of the Chameleon theme, so it should
> not be difficult to reproduce.
>
> On Wed, Jun 15, 2011 at 12:46 PM, Wick, Ryan  
> wrote:
>> Thanks for offering to help. I agree about the need to upgrade, and this is 
>> a pretty quiet time to do so.
>>
>> I'm guessing the theme will need to be done from scratch. It was already 
>> cobbled together.
>>
>> I'll try and send you some more information later today. If anyone else 
>> really wants in on this, let me know.
>>
>> Ryan Wick
>>
>> -Original Message-
>> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Cary 
>> Gordon
>> Sent: Wednesday, June 15, 2011 12:31 PM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: Re: [CODE4LIB] Code4Lib 2012 Seattle Update
>>
>> That's me!
>>
>> It is probably a good time to move this to a newer version, perhaps Drupal 
>> 7, if for no other reason than security. The only downside is that the theme 
>> would either need to be recreated or change. No biggy, really.
>>
>> If someone wants to send me the code and a DB dump, I will do it in my 
>> less-than-ample spare time.
>>
>> Cary
>>
>> On Wed, Jun 15, 2011 at 12:21 PM, Rob Casson  wrote:
>>> i've got admin rights on the code4lib drupal, so i went ahead and set the 
>>> alias:
>>>
>>>  http://code4lib.org/code4lib_2012_sponsorship
>>>
>>> cary: i'll look into getting you the correct privileges.  you're
>>> highermath, correct?
>>>
>>> cheers,
>>> rob
>>>
>>> On Wed, Jun 15, 2011 at 3:15 PM, Cary Gordon  wrote:
 In a modern version of Drupal, you can set a path alias for any page.
 Unfortunately, C4L does not appear to be in a modern version of
 Drupal. It looks like 4.7 or earlier.

 I would be happy to volunteer to help manage it.

 Cary

 On Wed, Jun 15, 2011 at 11:13 AM, Anjanette Young
   wrote:
> Hey Susan,
>
> Sweet! Language. Information. Social niceties.
>
> Here is the link to the 2012 sponsor page.
>
> http://code4lib.org/node/417
>
> (Anyone know how to make that a nicer url on drupal?)
>
> There seems to be discussion on expanding options for sponsorship,
> but the options on the page are standard.
> Thank you for the words.  Hope that it turns out that you are able to
> travel to Seattle for the conference.
>
> --Anj
>
> On Wed, Jun 15, 2011 at 9:51 AM, Susan 
> Kanewrote:
>
>> Hi Anj,
>>
>> Nice to see your name again after meeting briefly at UW when you
>> were coming and I was leaving for Boston!
>>
>> I doubt I'll be able to attend the conference this year but I've
>> put the word out to the group of Ex Libris and Endeavor alumni that
>> I manage on LinkedIn.  Many people now work for other library technology 
>> companies.
>> Will let you know if anything useful comes back.
>>
>> Here's a copy of my promotional message, in case others on the list
>> want to try their own networks.  It might help our cause if someone
>> could add a link about sponsorships to the conference section of
>> the website.
>>
>> --- promotional blurb ---
>>
>> c4l -- code4lib is a unique conference that attracts a small but
>> influential group of library technologists each year. Next year's
>> conference is Feb 6-9,
>> 2012 in Seattle, WA. They are still seeking vendor sponsorships --
>> great visibility with influential folks for a fraction of the cost
>> of ALA!   If you can help, please contact me privately through
>> <>.
>>
>> http://code4lib.org/conference
>> -- promotional blurb ---
>>
>> Susan Kane
>> Harvard University OIS
>>
>
>
> --
> Anjanette Young | Systems Librarian
> University of Washington Libraries
> Box 352900 | Seattle, WA 98195
> Phone: 206.616.2867
>


 --
 Cary Gordon
 The C

Re: [CODE4LIB] What do you wish you had time to learn?

2011-04-26 Thread Walker, David
git

Feeling very un-cool using svn still. :-(

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Edward 
Iglesias [edwardigles...@gmail.com]
Sent: Tuesday, April 26, 2011 5:30 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] What do you wish you had time to learn?

Hello All,

I am doing a presentation at RILA (Rhode Island Library Association) on
changing skill sets for Systems Librarians.  I did a formal survey a while
back (if you participated, thank you) but this stuff changes so quickly I
thought I would ask this another way.  What do you wish you had time to
learn?

My list includes


CouchDB(NoSQL in general)
neo4j
nodejs
prototype
API Mashups
R

Don't be afraid to include Latin or Greek History.  I'm just going for a
snapshot of System angst at not knowing everything.

Thanks,


~
Edward Iglesias
Systems Librarian
Central Connecticut State University


Re: [CODE4LIB] Google Book Search and Millennium

2011-04-26 Thread Walker, David
I would prefer a more open place to collect these things, too -- Github sounds 
great.  I've got my own III hack [1], and I've kept it out of the IUG 
clearinghouse on purpose.

--Dave

[1] http://code.google.com/p/shrew/

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Gabriel 
Farrell [gsf...@gmail.com]
Sent: Tuesday, April 26, 2011 10:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Google Book Search and Millennium

For some reason I assumed Github would be a better spot for code
sharing than the IUG website, but I'm happy with any accessible place
to collect these.

On Tue, Apr 26, 2011 at 11:32 AM, Kyle Banerjee  wrote:
> IUG recently opened up stuff that has traditionally been passworded to
> everyone. You might ask if this area will be opened too as it may still be
> closed as an oversight.
>
> kyle
>
> On Tue, Apr 26, 2011 at 7:22 AM, Walker, David  wrote:
>
>> IUG has an area on their website called the Clearinghouse, which has a
>> number of scripts and other things.  It's behind a login, unfortunately,
>> although any IUG member can get access.
>>
>> --Dave
>>
>> ==
>> David Walker
>> Library Web Services Manager
>> California State University
>> http://xerxes.calstate.edu
>> 
>> From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Patrick
>> Berry [pbe...@gmail.com]
>> Sent: Tuesday, April 26, 2011 7:18 AM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: Re: [CODE4LIB] Google Book Search and Millennium
>>
>> I think collecting and documenting these hacks would be a fabulous idea.  I
>> know I got a lot of help from a message sent to the IUG by one of our
>> librarians.  They may be way ahead of us (or not) but it will be a good
>> place to check.
>>
>> On Mon, Apr 25, 2011 at 8:36 PM, Gabriel Farrell  wrote:
>>
>> > Nice work, Patrick. You reminded me I never mentioned on this list the
>> > III Refworks Export script I put up on GitHub (see the code for props
>> > to those who did most of the work). It's at
>> > https://github.com/gsf/refworksexport. Maybe we should start
>> > collecting these under a "iiihacks" GitHub org.
>> >
>> > On Mon, Apr 25, 2011 at 5:48 PM, Patrick Berry  wrote:
>> > > Hi,
>> > >
>> > > We're working on "integrating" links to Google Books from Millennium.
>> >  I'm
>> > not a fan of rewriting things from scratch, so I've borrowed heavily
>> > from
>> > > those that already have this working.  Props to the gbsclasses.js
>> folks,
>> > > MSU, and Temple.  One thing I noticed is that IE 9 (perhaps earlier
>> > versions
>> > > as well) do not work with the code in use at MSU and Temple on the
>> > > bib_display.html templates.
>> > >
>> > > I've done some clean-up on a static example:
>> > > http://www.csuchico.edu/~pberry/google-books/
>> > >
>> > > Questions? Comments? DMCA notices?
>> > >
>> > > Pat in Chico
>> > >
>> >
>>
>
>
>
> --
> --
> Kyle Banerjee
> Digital Services Program Manager
> Orbis Cascade Alliance
> baner...@uoregon.edu / 503.877.9773
>


Re: [CODE4LIB] Google Book Search and Millennium

2011-04-26 Thread Walker, David
IUG has an area on their website called the Clearinghouse, which has a number 
of scripts and other things.  It's behind a login, unfortunately, although any 
IUG member can get access.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Patrick Berry 
[pbe...@gmail.com]
Sent: Tuesday, April 26, 2011 7:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Google Book Search and Millennium

I think collecting and documenting these hacks would be a fabulous idea.  I
know I got a lot of help from a message sent to the IUG by one of our
librarians.  They may be way ahead of us (or not) but it will be a good
place to check.

On Mon, Apr 25, 2011 at 8:36 PM, Gabriel Farrell  wrote:

> Nice work, Patrick. You reminded me I never mentioned on this list the
> III Refworks Export script I put up on GitHub (see the code for props
> to those who did most of the work). It's at
> https://github.com/gsf/refworksexport. Maybe we should start
> collecting these under a "iiihacks" GitHub org.
>
> On Mon, Apr 25, 2011 at 5:48 PM, Patrick Berry  wrote:
> > Hi,
> >
> > We're working on "integrating" links to Google Books from Millennium.
>  I'm
> > not a fan of rewriting things from scratch, so I've borrowed heavily
> from
> > those that already have this working.  Props to the gbsclasses.js folks,
> > MSU, and Temple.  One thing I noticed is that IE 9 (perhaps earlier
> versions
> > as well) do not work with the code in use at MSU and Temple on the
> > bib_display.html templates.
> >
> > I've done some clean-up on a static example:
> > http://www.csuchico.edu/~pberry/google-books/
> >
> > Questions? Comments? DMCA notices?
> >
> > Pat in Chico
> >
>


Re: [CODE4LIB] geo-locating email domains

2011-03-24 Thread Walker, David
Oh, I'm sure there is *a* contingent in the Bay Area.

But Roy threw down the gauntlet, saying NorCal was more into Code4lib than 
SoCal.  I ain't letting no gmail accounts inflate his numbers. ;-)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease 
Morgan [emor...@nd.edu]
Sent: Thursday, March 24, 2011 10:08 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] geo-locating email domains

On Mar 24, 2011, at 1:02 PM, Walker, David wrote:

>>> http://bit.ly/hdL55U
>
> But doesn't the large circle over the Bay Area come from all the gmail 
> accounts hosted in Mountain View?

No, not exactly.

Yes, much of the area is centered around Mountain View (Gmail), but as you zoom 
in you see there is a contingent of folks in the Bay Area -- 
http://bit.ly/hZdAPN

--
Eric Morgan


Re: [CODE4LIB] geo-locating email domains

2011-03-24 Thread Walker, David
That is a good idea, Roy.

But doesn't the large circle over the Bay Area come from all the gmail accounts 
hosted in Mountain View?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease 
Morgan [emor...@nd.edu]
Sent: Thursday, March 24, 2011 9:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] geo-locating email domains

On Mar 24, 2011, at 12:45 PM, Roy Tennant wrote:

>> http://bit.ly/hdL55U
>
> Wow, who knew there was such an epicenter of subscribers in Northern
> California, and that we would eclipse our Southern California
> colleagues? Maybe we need to hold a regional Code4Lib here in the Bay
> Area.

Actually, Roy, that sounds like a good idea.

--
Eric Morgan


Re: [CODE4LIB] Simple Web-based Dublin Core search engine?

2011-03-16 Thread Walker, David
I wonder if you might be able to load the file in PKP Harvester.

  http://pkp.sfu.ca/?q=harvester

It should already be able to parse and index OAI-DC, and would give you a nice, 
simple interface.  It's based on a straight LAMP stack, which would make it 
easier to get up and running than some of the other suggestions so far.

It's designed to harvest rather than load data, but that has got to be a fairly 
simple thing to work around.  I've never done this myself, so I could be 
entirely wrong.
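
If it helps to see how little is involved on the parsing side, here is a rough, 
untested sketch (the file name is a placeholder) that pulls titles and identifiers 
out of a file of oai_dc records with Python's lxml:

   # Untested sketch: walk a file of oai_dc records and print titles/identifiers,
   # e.g. as a first step toward feeding whatever index you end up using.
   from lxml import etree

   OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
   DC = "http://purl.org/dc/elements/1.1/"

   tree = etree.parse("records.xml")  # placeholder file name

   for record in tree.iter("{%s}dc" % OAI_DC):
       titles = [t.text for t in record.findall("{%s}title" % DC)]
       identifiers = [i.text for i in record.findall("{%s}identifier" % DC)]
       print(titles, identifiers)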

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Edward M. 
Corrado [ecorr...@ecorrado.us]
Sent: Wednesday, March 16, 2011 8:00 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Simple Web-based Dublin Core search engine?

Hi,

I [will soon] have a small set (< 1000 records) of Dublin Core
metadata published in OAI_DC format that I want to be searchable via a
Web browser.  Normally we would use Ex Libris's Primo for this, but
this particular set of data may have some confidential information and
our repository only has minimal built-in search functions. While we
still may go with Primo for these records, I am looking for at other
possibilities. The requirements as I see them are:

1) Can ingest records in OAI_DC format
2) Allow remote end-users who are familiar with the collection to search
these ingested records via a Web browser.
3) Search should be keyword anywhere or individual fields, although it
does not need to have every whizzbang feature out there. In other
words, basic search features are fine.
4) Should support the ability to link to the display copy in our
repository (probably goes without saying)
5) Should be simple to install and maintain (Thus, at least in my
mind, eliminating something like Blacklight)
6) Preferably a LAMP application although a Windows server based
solution is a possibility as well
7) Preferably Open Source, or at least no- or low-cost

I haven't been able to find anything searching the Web, but it seems
like something people may have done before. Before I re-invent the
wheel or shoe-horn something together, does anyone have any
suggestions?

Edward


Re: [CODE4LIB] dealing with Summon

2011-03-02 Thread Walker, David
Sorry, it wasn't my intention to derail the conversation, or anything.  Just 
wanted to find out -- for my own purposes -- if there is also a Summon 
listserv.  I'll go Google that, though.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Godmar Back [god...@gmail.com]
Sent: Wednesday, March 02, 2011 8:38 AM
To: Code for Libraries
Cc: Walker, David
Subject: Re: [CODE4LIB] dealing with Summon

On Wed, Mar 2, 2011 at 11:36 AM, Walker, David  wrote:
> Just out of curiosity, is there a Summon (API) developer listserv?  Should 
> there be?

Yes, there is - I'm waiting for my subscription there to be approved.

Like I said at the beginning of this thread, this is only tangentially
a Code4Lib issue, and certainly the details aren't.  But perhaps the
general problem is (?)

 - Godmar


Re: [CODE4LIB] dealing with Summon

2011-03-02 Thread Walker, David
Just out of curiosity, is there a Summon (API) developer listserv?  Should 
there be?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Godmar Back 
[god...@gmail.com]
Sent: Wednesday, March 02, 2011 8:30 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] dealing with Summon

On Wed, Mar 2, 2011 at 11:12 AM, Roy Tennant  wrote:
> Godmar,
> I'm surprised you're asking this. Most of the questions you want
> answered could be answered by a basic programming construct: an
> if-then-else statement and a simple decision about what you want to
> use in your specific application (for example, do you prefer "text"
> with the period, or not?). About the only question that such a
> solution wouldn't deal with is "which fields are derived from which
> others", which strikes me as superfluous to your application if you
> know a hierarchy of preference. But perhaps I'm missing something
> here.

I'm not asking how to code it, I'm asking for the algorithm I should
use, given the fact that I'm not familiar with the provenance and
status of the data Summon returns (which, I understand, is a mixture
of original, harvested data, and "cleaned-up", processed data.)

Can you suggest such an algorithm, given the fact that each of the 8
elements I showed in the example (PublicationDateYear,
PublicationDateDecade, PublicationDate, PublicationDateCentury,
PublicationDate_xml.text, PublicationDate_xml.day,
PublicationDate_xml.month, PublicationDate_xml.year) is optional?  But
wait -- I think I've also seen records where there is a
PublicationDateMonth, and records where some values have arrays of
length > 1.

Can you suggest, or at least outline, such an algorithm?

It would be helpful to know, for instance, if the presence of a
PublicationDate_xml field supplants any other PublicationDate* fields
(does it?)  If a PublicationDate_xml field is absent, which field
would I want to look at next?  Is PublicationDate more reliable than a
combination of PublicationDateYear and PublicationDateMonth (and
perhaps PublicationDateDay if it exists?)?

If the PublicationDate_xml is present, then: should I prefer the .text
option?  What's the significance of that dot? Is it spurious, like the
identifier you mentioned you find in raw MARC records?  If not, what,
if anything, is known about the presence of the other fields?  What if
multiple fields are given in an array?  Is the ordering significant
(e.g., the first one is more trustworthy?) Or should I sort them based
on a heuristic?  (e.g., if "20100523" and "201005" are given, prefer
the former?)  What if the data is contradictory?

These are the questions I'm seeking answers to; I know that those of
you who have coded their own Summon front-ends must have faced the
same questions when implementing their record displays.

 - Godmar


Re: [CODE4LIB] exporting marc records from iii

2011-02-18 Thread Walker, David
Hey Eric,

Is this an Innovative system you have access to (at Notre Dame)?  And do you 
need to do this one time only, or does it need to be automated and ongoing?

If it's a system you have access to, and you only need it once, then you might 
just have one of the staff there use the Millennium client to get these 
records.  Innovative provides modules (Create Lists and Data Exchange) to 
search for and export MARC records.  There is, of course, documentation for 
that.

If it's an external system, or you want to automate the above task, then that's 
a much trickier question.  We have some code here that might help with that, 
but I don't want to overly-complicate your task.
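
If it turns out the export can't be limited up front, filtering the exported file 
afterward is simple enough.  A rough, untested pymarc sketch (the file names are 
placeholders) would be something like:

   # Untested sketch: copy only the records with "crra" in a 590 subfield a
   # from an exported MARC file into a new file.
   from pymarc import MARCReader, MARCWriter

   with open("export.mrc", "rb") as infile, open("crra.mrc", "wb") as outfile:
       writer = MARCWriter(outfile)
       for record in MARCReader(infile):
           five_ninety_a = " ".join(
               sf for f in record.get_fields("590") for sf in f.get_subfields("a")
           )
           if "crra" in five_ninety_a.lower():
               writer.write(record)
       writer.close()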

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric Lease 
Morgan [emor...@nd.edu]
Sent: Friday, February 18, 2011 7:48 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] exporting marc records from iii

How does a person go about exporting MARC records from a III system?

As you may or may not know, I spend a lot of my time developing a thing 
colloquially called  the "Catholic Portal". It uses VUFind under the hood, and 
it requires me to ingest bibliographic data from a myriad of libraries.

Suppose the records I desire have the letters "crra" saved in MARC field 590$a. 
What is the process for connecting to a III system, searching for "crra" in 590$a, 
and saving the result as a file of MARC records? Is there some sort of 
documentation I can read that will help me out in this regard?

--
Eric Lease Morgan


Re: [CODE4LIB] MARCXML - What is it for?

2010-10-27 Thread Walker, David
> Crosswalking doesn't hold water as a justification for MARCXML.

To be fair, though, most of us have simpler crosswalking needs than OCLC.  

And if I need to go from binary MARC to some XML schema (which I sometimes do), 
then MARC-XML and the XSLT style sheets at LOC seem like a pretty good starting 
point to me.  Better than starting from scratch.
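
To make that concrete, a rough, untested sketch of that pipeline in Python -- pymarc 
for the binary MARC, lxml for the XSLT -- assuming you've downloaded one of the LC 
stylesheets locally (the MODS one in this example) and with placeholder file names:

   # Sketch: binary MARC -> MARC-XML -> MODS using an LC XSLT stylesheet.
   from pymarc import MARCReader, record_to_xml
   from lxml import etree

   transform = etree.XSLT(etree.parse("MARC21slim2MODS3.xsl"))  # downloaded locally

   with open("records.mrc", "rb") as fh:
       for record in MARCReader(fh):
           marcxml = etree.fromstring(record_to_xml(record, namespace=True))
           mods = transform(marcxml)
           print(etree.tostring(mods, pretty_print=True))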

Which isn't to say that that approach is always the right one for every 
project. I very much agree with MJ: If it works for you, use it.  If not, don't.

But if someone else has a better, general purpose solution to this problem, 
then by all means open source that puppy and let the rest of us have at it!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Smith,Devon 
[smit...@oclc.org]
Sent: Tuesday, October 26, 2010 7:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARCXML - What is it for?

> One way is to first transform the MARC into MARC-XML.  Then you can
use XSLT to crosswalk the MARC-XML
> into that other schema.  Very handy.

> Your criticisms of MARC-XML all seem to presume that MARC-XML is the
goal, the end point in the process.
> But MARC-XML is really better seen as a utility, a middle step between
binary MARC and the real goal,
> which is some other "useful and interesting" XML schema.

Unless "useful and interesting" is a euphemism for Dublin Core, then
using XSLT for crosswalking is not really an option. Well, not a good
option. On the other end of the spectrum, assume Onix for "useful and
interesting" and XSLT simply won't work.

Crosswalking doesn't hold water as a justification for MARCXML.

/dev
--
Devon Smith
Consulting Software Engineer
OCLC Research
http://www.oclc.org/research/people/smith.htm




-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Walker, David
Sent: Monday, October 25, 2010 8:57 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARCXML - What is it for?

> b) expanding it to be actually useful and interesting.

But here I think you've missed the very utility of MARC-XML.

Let's say you have a binary MARC file (the kind that comes out of an
ILS) and want to transform that into MODS, Dublin Core, or maybe some
other XML schema.

How would you do that?

One way is to first transform the MARC into MARC-XML.  Then you can use
XSLT to crosswalk the MARC-XML into that other schema.  Very handy.

Your criticisms of MARC-XML all seem to presume that MARC-XML is the
goal, the end point in the process.  But MARC-XML is really better seen
as a utility, a middle step between binary MARC and the real goal, which
is some other "useful and interesting" XML schema.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of
Alexander Johannesen [alexander.johanne...@gmail.com]
Sent: Monday, October 25, 2010 12:38 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARCXML - What is it for?

Hiya,

On Tue, Oct 26, 2010 at 6:26 AM, Nate Vack  wrote:
> Switching to an XML format doesn't help with that at all.

I'm willing to take it further and say that MARCXML was the worst
thing the library world ever did. Some might argue it was a good first
step, and that it was better with something rather than nothing, to
which I respond ;

Poppycock!

MARCXML is nothing short of evil. Not only does it go against every
principle of good XML anywhere (don't rely on whitespace, structure
over code, namespace conventions, identity management, document
control, separation of entities and properties, and on and on), it
breaks the ontological commitment that a better treatment of the MARC
data could bring, deterring people from actually a) using the darn
thing as anything but a bare minimal crutch, and b) expanding it to be
actually useful and interesting.

The quicker the library world can get rid of this monstrosity, the
better, although I doubt that will ever happen; it will hang around
like a foul stench for as long as there is MARC in the world. A long
time. A long sad time.

A few extra notes;
   http://shelterit.blogspot.com/2008/09/marcxml-beast-of-burden.html

Can you tell I'm not a fan? :)


Kind regards,

Alex
--
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic
Maps
--- http://shelter.nu/blog/
--
-- http://www.google.com/profiles/alexander.johannesen
---


Re: [CODE4LIB] MARCXML - What is it for?

2010-10-27 Thread Walker, David
> I've been involved in several projects lambasted 
> because managers think MARCXML is solving 
> some imaginary problem

It seems to me that this is really the heart of your argument.  You had this 
experience, and now are projecting the opinions of these managers onto "lots of 
people in the library world."

I've worked in libraries for nearly a decade, and have never met anyone 
(manager or otherwise) who held the belief that XML in general, or MARC-XML in 
particular, somehow magically solves all metadata problems.  

I guess our two experiences cancel each other out, then.  And, ultimately, none 
of that has anything to do with MARC-XML itself. 

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alexander 
Johannesen [alexander.johanne...@gmail.com]
Sent: Monday, October 25, 2010 7:10 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARCXML - What is it for?

On Tue, Oct 26, 2010 at 12:48 PM, Bill Dueber  wrote:
> Here, I think you're guilty of radically underestimating "lots of people
> around the library world." No one thinks MARC is a good solution to
> our modern problems, and no one who actually knows what MARC
> is has trouble understanding MARC-XML as an XML serialization of
> the same old data -- certainly not anyone capable of meaningful
> contribution to work on an alternative.

Slow down, Tex. "Lots of people in the library world" is not the same
as developers, or even good developers, or even good XML developers,
or even good XML developers who knows what the document model imposes
to a data-centric approach.

> The problem we're dealing with is *hard*. Mind-numbingly hard.

This is no justification for not doing things better. (And I'd love to
know what the hard bits are; always interesting to hear from various
people as to what they think are the *real* problems of library
problems, as opposed to any other problem they have)

> The library world has several generations of infrastructure built
> around MARC (by which I mean AACR2), and devising data
> structures and standards that are a big enough improvement over
>  MARC to warrant replacing all that infrastructure is an engineering
>  and political nightmare.

Political? For sure. Engineering? Not so much. This is just that whole
"blinded by MARC" issue that keeps cropping up from time to time, and
rightly so; it is truly a beast - at least the way we have come to
know it through AACR2 and all its friends and its death-defying focus
on all things bibliographic - that has paralyzed library innovation,
probably to the point of making libraries almost irrelevant to the
world.

> I'm happy to take potshots at the RDA stuff from the sidelines, but I never
> forget that I'm on the sidelines, and that the people active in the game are
> among the best and brightest we have to offer, working on a problem that
>  invariably seems more intractable the deeper in you go.

Well, that's a pretty scary sentence, for all sorts of reasons, but I
think I shall not go there.

> If you think MARC-XML is some sort of an actual problem

What, because you don't agree with me the problem doesn't exist? :)

> and that people
> just need to be shouted at to realize that and do something about it, then,
> well, I think you're just plain wrong.

Fair enough, although you seem to be under the assumption that all of
the stuff I'm saying is a figment of my imagination (I've been
involved in several projects lambasted because managers think MARCXML
is solving some imaginary problem; this is not bullshit, but pain and
suffering from the battlefields of library development), that I'm not
one of those developers (or one of you, although judging from this
discussion it's clear that I am not), that the things I say somehow
doesn't apply because you don't agree with, umm, what I'm assuming is
my somewhat direct approach to stating my heretic opinions.


Alex
--
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ --
-- http://www.google.com/profiles/alexander.johannesen ---


Re: [CODE4LIB] MARCXML - What is it for?

2010-10-25 Thread Walker, David
> b) expanding it to be actually useful and interesting.

But here I think you've missed the very utility of MARC-XML.

Let's say you have a binary MARC file (the kind that comes out of an ILS) and 
want to transform that into MODS, Dublin Core, or maybe some other XML schema.  

How would you do that?  

One way is to first transform the MARC into MARC-XML.  Then you can use XSLT to 
crosswalk the MARC-XML into that other schema.  Very handy.

Your criticisms of MARC-XML all seem to presume that MARC-XML is the goal, the 
end point in the process.  But MARC-XML is really better seen as a utility, a 
middle step between binary MARC and the real goal, which is some other "useful 
and interesting" XML schema.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alexander 
Johannesen [alexander.johanne...@gmail.com]
Sent: Monday, October 25, 2010 12:38 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARCXML - What is it for?

Hiya,

On Tue, Oct 26, 2010 at 6:26 AM, Nate Vack  wrote:
> Switching to an XML format doesn't help with that at all.

I'm willing to take it further and say that MARCXML was the worst
thing the library world ever did. Some might argue it was a good first
step, and that it was better with something rather than nothing, to
which I respond ;

Poppycock!

MARCXML is nothing short of evil. Not only does it go against every
principle of good XML anywhere (don't rely on whitespace, structure
over code, namespace conventions, identity management, document
control, separation of entities and properties, and on and on), it
breaks the ontological commitment that a better treatment of the MARC
data could bring, deterring people from actually a) using the darn
thing as anything but a bare minimal crutch, and b) expanding it to be
actually useful and interesting.

The quicker the library world can get rid of this monstrosity, the
better, although I doubt that will ever happen; it will hang around
like a foul stench for as long as there is MARC in the world. A long
time. A long sad time.

A few extra notes;
   http://shelterit.blogspot.com/2008/09/marcxml-beast-of-burden.html

Can you tell I'm not a fan? :)


Kind regards,

Alex
--
 Project Wrangler, SOA, Information Alchemist, UX, RESTafarian, Topic Maps
--- http://shelter.nu/blog/ --
-- http://www.google.com/profiles/alexander.johannesen ---


Re: [CODE4LIB] Help with DLF-ILS GetAvailability

2010-10-21 Thread Walker, David
> Yes - my reading was that dlf:holdings was for pure 'holdings' 
> as opposed to 'availability'.

I would agree with Jonathan that putting a summary of item availability in 
dlf:holdings is not an abuse.

For example, ISO Holdings -- one of the schemas the DLF-ILS documentation suggests 
using here -- has collection-level elements that carry very much the kind of 
summary information you are using.  Those are different from its item-level 
elements, which describe individual items.

So IMO it wouldn't be (much of) a stretch to express this in 
dlf:simpleavailability instead.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Thursday, October 21, 2010 1:26 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Help with DLF-ILS GetAvailability

I don't think that's an abuse.  I consider dlf:holdings to be for
information about a "holdingset", or some collection of "items", while
the item-level element is for information about an individual item.

I think regardless of what you do you are being over-optimistic in
thinking that if you just "do dlf", your stuff will interchangeable with
any other clients or servers "doing dlf". The spec is way too open-ended
for that, it leaves a whole bunch of details not specified and up to the
implementer.  For better or worse. I made more comments about this in
the blog post I referenced earlier.

Jonathan

Owen Stephens wrote:
> Thanks Dave,
>
> Yes - my reading was that dlf:holdings was for pure 'holdings' as opposed to
> 'availability'. We could put the simpleavailability in there I guess but as
> you say since we are controlling both ends then there doesn't seem any point
> in abusing it like that. The downside is we'd hoped to do something that
> could be taken by other sites - the original plan was to use the Juice
> framework - developed by Talis using jQuery to parse a standard availability
> format so that this could then be applied easily in other environments.
> Obviously we can still achieve the outcome we need for the immediate
> requirements of the project by using a custom format.
>
> Thanks again
>
> Owen
>
>
> On Thu, Oct 21, 2010 at 4:28 PM, Walker, David  wrote:
>
>
>> Hey Owen,
>>
>> Seems like you could use the dlf:holdings element to hold this kind
>> of individual library information.
>>
>> The DLF-ILS documentation doesn't seem to think that you would use
>> dlf:simpleavailability here, though, but rather MARC or ISO holdings
>> schemas.
>>
>> But if you're controlling both ends of the communication, I don't know if
>> it really matters.
>>
>> --Dave
>>
>> ==
>> David Walker
>> Library Web Services Manager
>> California State University
>> http://xerxes.calstate.edu
>> 
>> From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen
>> Stephens [o...@ostephens.com]
>> Sent: Wednesday, October 20, 2010 12:22 PM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: [CODE4LIB] Help with DLF-ILS GetAvailability
>>
>> I'm working with the University of Oxford to look at integrating some
>> library services into their VLE/Learning Management System (Sakai). One of
>> the services is something that will give availability for items on a reading
>> list in the VLE (the Sakai 'Citation Helper'), and I'm looking at the
>> DLF-ILS GetAvailability specification to achieve this.
>>
>> For physical items, the availability information I was hoping to use is
>> expressed at the level of a physical collection. For example, if several
>> college libraries within the University hold an item, I have aggregated
>> information that tells me the availability of the item in each of the college libraries.
>> However, I don't have item level information.
>>
>> I can see how I can use simpleavailability to say over the entire
>> institution whether (e.g.) a book is available or not. However, I'm not
>> clear I can express this in a more granular way (say availability on a
>> library by library basis) except by going to item level. Also although it
>> seems you can express multiple locations in simpleavailability, and multiple
>> availabilitymsg, there is no way I can see to link these, so although I
>> could list each location OK, I can't attach an availabilitymsg to a specific
>> location (unless I only express one location).
>>
>> Am I missing something, or is my interpretation correct?
>>
>> Any other suggestions?
>>
>> Thanks,
>>
>> Owen
>>
>> PS also looked at DAIA which I like, but this (as far as I can tell) only
>> allows availability to be specified at the level of items
>>
>>
>> Owen Stephens
>> Owen Stephens Consulting
>> Web: http://www.ostephens.com
>> Email: o...@ostephens.com
>> Telephone: 0121 288 6936
>>
>>
>
>
>
>


Re: [CODE4LIB] Help with DLF-ILS GetAvailability

2010-10-21 Thread Walker, David
Hey Owen,

Seems like you could use the dlf:holdings element to hold this kind of 
individual library information.

The DLF-ILS documentation doesn't seem to think that you would use 
dlf:simpleavailability here, though, but rather MARC or ISO holdings schemas.

But if you're controlling both ends of the communication, I don't know if it 
really matters.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen Stephens 
[o...@ostephens.com]
Sent: Wednesday, October 20, 2010 12:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Help with DLF-ILS GetAvailability

I'm working with the University of Oxford to look at integrating some library 
services into their VLE/Learning Management System (Sakai). One of the services 
is something that will give availability for items on a reading list in the VLE 
(the Sakai 'Citation Helper'), and I'm looking at the DLF-ILS GetAvailability 
specification to achieve this.

For physical items, the availability information I was hoping to use is 
expressed at the level of a physical collection. For example, if several 
college libraries within the University hold an item, I have aggregated 
information that tells me the availability of the item in each of the college libraries. 
However, I don't have item level information.

I can see how I can use simpleavailability to say over the entire institution 
whether (e.g.) a book is available or not. However, I'm not clear I can express 
this in a more granular way (say availability on a library by library basis) 
except by going to item level. Also although it seems you can express multiple 
locations in simpleavailability, and multiple availabilitymsg, there is no way 
I can see to link these, so although I could list each location OK, I can't 
attach an availabilitymsg to a specific location (unless I only express one 
location).

Am I missing something, or is my interpretation correct?

Any other suggestions?

Thanks,

Owen

PS also looked at DAIA which I like, but this (as far as I can tell) only 
allows availability to be specified at the level of items


Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com
Telephone: 0121 288 6936


[CODE4LIB] Old MARC 007 field

2010-09-13 Thread Walker, David
I have some old MARC records that appear to have 007 fields that follow the 
pre-1981 structure.  

The LOC MARC pages mention this older structure in a note at the bottom of the 
page [1], but don't give a whole lot of information on it.

I'm curious if others have run into this, and what you've done to work around 
it?  I'm using the 007 -- in part anyway -- to determine the format of the item 
it describes.
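
For context, the post-1981 side of what I'm doing is roughly the following -- a 
simplified, untested sketch with an abbreviated code-to-format mapping and a 
placeholder file name -- and the pre-1981 records are exactly where it falls apart, 
since their 007 values don't match the current category-of-material codes:

   # Simplified sketch: guess a format from 007/00 (post-1981 codes, abbreviated).
   from pymarc import MARCReader

   FORMATS = {
       "a": "map", "c": "electronic resource", "h": "microform",
       "m": "motion picture", "s": "sound recording", "v": "videorecording",
   }

   def guess_format(record):
       for field in record.get_fields("007"):
           fmt = FORMATS.get(field.data[0:1])
           if fmt:
               return fmt
       return None  # no 007, or an older-style value we don't recognize

   with open("records.mrc", "rb") as fh:
       for record in MARCReader(fh):
           print(record.title(), guess_format(record))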

--Dave

[1] http://www.loc.gov/marc/bibliographic/bd007.html

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] usability question: searching for a database (not in a database)

2010-07-30 Thread Walker, David
I think text above or before the search box is bound to be ignored.  But if you 
change the wording of the search *button*, you might get people to notice it 
more.  

Something like:

   [ search box ] [ button: locate databases ]

 . . . or you know, something better worded than that.  And then, as Joe 
suggested, some note in the results explaining that this is not what you think 
it is.

You might do something similar for your journal list, since that's another 
place where users think they are searching inside the container (i.e., for 
articles) instead of for containers themselves (i.e., the names of journals).

And putting those with an actual search box for articles on the same page 
(search for articles, search for journals, search for databases) would clarify 
the purpose even more.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Sarah Weeks 
[sarahweeks...@gmail.com]
Sent: Friday, July 30, 2010 5:22 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] usability question: searching for a database (not in a 
database)

Long time lurker, first time poster.
I have a little usability question I was hoping someone could give me advice
on.
I'm updating the databases page on our website and we'd like to add a search
box that would search certain fields we have set up for our databases
(title, vendor, etc...) so that even if someone doesn't remember the first
word in the title, they can quickly find the database they're looking
for without having to scroll through the whole A-Z list.
My question is: if we add a search box to our main database page, how can we
make it clear that it's for searching FOR a database and not IN a database?
Some of the choices we've considered are:
Search for a database:
Search this list:
Don't remember the name of the database? Search here:

I'm not feeling convinced by any of them. I'm afraid when people see a
search box they're not going to bother reading the text but will just assume
it's a federated search tool.

Any advice?

-Sarah Beth

--
Sarah Beth Weeks
Interim Head Librarian of Technical Services and Systems
St Olaf College Rolvaag Memorial Library
1510 St. Olaf Avenue
Northfield, MN 55057
507-786-3453 (office)
717-504-0182 (cell)


Re: [CODE4LIB] DIY aggregate index

2010-06-30 Thread Walker, David
You might also need to factor an extra server or three (in the cloud or 
otherwise) into that equation, given that we're talking 100s of millions of 
records that will need to be indexed.

> companies like iii and Ex Libris are the only ones with
> enough clout to negotiate access

I don't think III is doing any kind of aggregated indexing, hence their 
decision to try and leverage APIs.  I could be wrong.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Wednesday, June 30, 2010 1:15 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] DIY aggregate index

Cory Rockliff wrote:
> Do libraries opt for these commercial 'pre-indexed' services simply
> because they're a good value proposition compared to all the work of
> indexing multiple resources from multiple vendors into one local index,
> or is it that companies like iii and Ex Libris are the only ones with
> enough clout to negotiate access to otherwise-unavailable database
> vendors' content?
>
A little bit of both, I think. A library probably _could_ negotiate
access to that content... but it would be a heck of a lot of work. When
the staff time to negotiations come in, it becomes a good value
proposition, regardless of how much the licensing would cost you.  And
yeah, then the staff time to actually ingest and normalize and
troubleshoot data-flows for all that stuff on the regular basis -- I've
heard stories of libraries that tried to do that in the early 90s and it
was nightmarish.

So, actually, I guess i've arrived at convincing myself it's mostly
"good value proposition", in that a library probably can't afford to do
that on their own, with or without licensing issues.

But I'd really love to see you try anyway, maybe I'm wrong. :)

> Can I assume that if a database vendor has exposed their content to me
> as a subscriber, whether via z39.50 or a web service or whatever, that
> I'm free to cache and index all that metadata locally if I so choose? Is
> this something to be negotiated on a vendor-by-vendor basis, or is it an
> impossibility?
>
I doubt you can assume that.  I don't think it's an impossibility.

Jonathan


Re: [CODE4LIB] Innovative's Synergy

2010-06-30 Thread Walker, David
Hi Cindy,

Both the Ebsco and Proquest APIs are definitely available to customers.  We're 
using the Ebsco one in our Xerxes application, for example.  (I'll send you a 
link off-list, Cindy.)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Cindy Harper 
[char...@colgate.edu]
Sent: Wednesday, June 30, 2010 9:11 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Innovative's Synergy

Hi All - III is touting their web-services based Synergy product as having
the efficiency of a pre-indexed service and the timeliness of a just-in-time
service.  Does anyone know if the agreements they have made with database
vendors to use these web services preclude an organization developing an
open-source client to take advantage of those web services?  Just curious.
I suppose I should direct my question to EBSCO and Proquest directly.


Cindy Harper, Systems Librarian
Colgate University Libraries
char...@colgate.edu
315-228-7363


Re: [CODE4LIB] WorldCat as an OpenURL endpoint ?

2010-06-15 Thread Walker, David
> It seems like the more productive path if the goal of a user is
> simply to locate a copy, where ever it is held.

But I don't think users have *locating a copy* as their goal.  Rather, I think 
their goal is to *get their hands on the book*.

If I discover a book via COinS, and you drop me off at Worldcat.org, that 
allows me to see which libraries own the book.  But, unless I happen to be 
affiliated with those institutions, that's kinda useless information.  I have 
no real way of actually getting the book itself.

If, instead, you drop me off at your institution's link resolver menu, and 
provide me an ILL option in the event you don't have the book, the library can 
get the book for me, which is really my *goal*.

That seems like the more productive path, IMO.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Tom Keays 
[tomke...@gmail.com]
Sent: Tuesday, June 15, 2010 8:43 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] WorldCat as an OpenURL endpoint ?

On Mon, Jun 14, 2010 at 3:47 PM, Jonathan Rochkind  wrote:

> The trick here is that traditional library metadata practices make it _very
> hard_ to tell if a _specific volume/issue_ is held by a given library.  And
> those are the most common use cases for OpenURL.
>

Yep. That's true even for individual libraries with link resolvers. OCLC is
not going to be able to solve that particular issue until the local
libraries do.


> If you just want to get to the title level (for a journal or a book), you
> can easily write your own thing that takes an OpenURL, and either just
> redirects straight to worldcat.org on isbn/lccn/oclcnum, or actually does
> a WorldCat API lookup to ensure the record exists first and/or looks up on
> author/title/etc too.
>

I was mainly thinking of sources that use COinS. If you have a rarely held
book, for instance, then OpenURLs resolved against random institutional
endpoints are going to mostly be unproductive. However, a "union" catalog
such as OCLC already has the information about libraries in the system that
own it. It seems like the more productive path if the goal of a user is
simply to locate a copy, where ever it is held.


> Umlaut already includes the 'naive' "just link to worldcat.org based on
> isbn, oclcnum, or lccn" approach, functionality that was written before the
> worldcat api exists. That is, Umlaut takes an incoming OpenURL, and provides
> the user with a link to a worldcat record based on isbn, oclcnum, or lccn.
>

Many institutions have chosen to do this. MPOW, however, represents a
counter-example and does not link out to OCLC.

Tom


Re: [CODE4LIB] SRU indexes for Aleph

2010-06-09 Thread Walker, David
> Are you saying that Aleph has no native SRU 
> capability and YAZ is the only SRU access to it?

Someone correct me if I am wrong, but I'm pretty sure Aleph *doesn't* have an 
SRU interface.  There's no documentation for one on the Ex Libris site anyway.

It does have some web services, and it's possible the library you are accessing 
here, Ralph, has written an SRU wrapper around those.  YAZ Proxy in front of 
the Aleph Z39.50 server is also a possibility, as Ere mentioned.

Probably best to talk to the specific library here.  It's very likely this is 
their own creation.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of LeVan,Ralph 
[le...@oclc.org]
Sent: Wednesday, June 09, 2010 8:29 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] SRU indexes for Aleph

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of Ere 
Maijala
Sent: Wednesday, June 09, 2010 4:11 AM

> it's probably a custom YAZ Proxy. This is as far as I know the default
> mapping to Z39.50 (of course it could have been modified locally):

Really?  I find it hard to believe that the Index Data folks don't know how to 
make an Explain record.

Are you saying that Aleph has no native SRU capability and YAZ is the only SRU 
access to it?

Thanks!

Ralph


Re: [CODE4LIB] Multi-server Search Engine response times: was - OASIS SRU and CQL, access to most-current drafts

2010-05-19 Thread Walker, David
 things quickly than wait for relevance ranking. I suspect partly (can of 
> worms coming) because the existing ranking schemes don't make a lot of 
> difference (ducks quickly).
>
> Peter
>
> Peter Noerr
> CTO, Museglobal
> www.museglobal.com
>
>
>> -Original Message-
>> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
>> Walker, David
>> Sent: Tuesday, May 18, 2010 12:44 PM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: Re: [CODE4LIB] OASIS SRU and CQL, access to most-current
>> drafts
>>
>>
>>> in order to provide decent user experience you need to be
>>> able to present some results "sooner" than others.
>>>
>> I would actually question whether this is really necessary.
>>
>> A few years back, I did a big literature review on metasearch, as well
>> as looked at a good number of usability studies that libraries did
>> with metasearch systems.
>>
>> One thing that stood out to me was that the literature (written by
>> librarians and technologists) was very concerned about the slow search
>> times of metasearch, often seeing it as a deal-breaker.
>>
>> And yet, in the usability studies, actual students and faculty were far
>> less concerned about the search times -- within reason, of course.
>>
>> I thought the UC Santa Cruz study [1] summarized the point well: "Users
>> are willing to wait as long as they think that they will get useful
>> results. Their perceptions of time depend on this belief."
>>
>> Trying to return the results of a metasearch quickly just for the sake
>> of returning them quickly I think introduces other problems (in terms
>> of relevance ranking and presentation) that do far more to negatively
>> impact the user experience.  Just my opinion, of course.
>>
>> --Dave
>>
>> [1]
>> http://www.cdlib.org/services/d2d/metasearch/docs/core_ucsc_oct2004usab
>> ility.pdf
>>
>> ==
>> David Walker
>> Library Web Services Manager
>> California State University
>> http://xerxes.calstate.edu
>> 
>> From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Kuba
>> [skoc...@gmail.com]
>> Sent: Tuesday, May 18, 2010 9:57 AM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: Re: [CODE4LIB] OASIS SRU and CQL, access to most-current
>> drafts
>>
>> That is quite unfortunate, as we were looking at SRU 2.0 as a possible
>> candidate for the front-end protocol for Index Data's pazpar2. The
>> main problem with federate/broadcast/meta (however you want to call it
>> ;) searching is that the back-end databases are scattered in different
>> locations or simply slow in their response times and in order to
>> provide decent user experience you need to be able to present some
>> results "sooner" than others. Waiting for the slowest database to
>> respond is usually not an option.
>>
>> On Tue, May 18, 2010 at 5:24 PM, Ray Denenberg, Library of Congress
>>  wrote:
>>
>>> On 18 May 2010 15:24, Ray Denenberg, Library of Congress
>>>
>> 
>>
>>> wrote:
>>>
>>>> There is no synchronous operation in SRU.
>>>>
>>> Sorry, meant to say "no asynchronous .
>>>
>>> --Ray
>>>
>>>
>>
>> --
>>
>> Cheers,
>> Jakub
>>


Re: [CODE4LIB] OASIS SRU and CQL, access to most-current drafts

2010-05-18 Thread Walker, David
> in order to provide decent user experience you need to be 
> able to present some results "sooner" than others. 

I would actually question whether this is really necessary.

A few years back, I did a big literature review on metasearch, as well as 
looked at a good number of usability studies that libraries did with metasearch 
systems.

One thing that stood out to me was that the literature (written by librarians 
and technologists) was very concerned about the slow search times of 
metasearch, often seeing it as a deal-breaker.

And yet, in the usability studies, actual students and faculty were far less 
concerned about the search times -- within reason, of course.

I thought the UC Santa Cruz study [1] summarized the point well: "Users are 
willing to wait as long as they think that they will get useful results. Their 
perceptions of time depend on this belief."

Trying to return the results of a metasearch quickly just for the sake of 
returning them quickly I think introduces other problems (in terms of relevance 
ranking and presentation) that do far more to negatively impact the user 
experience.  Just my opinion, of course.

--Dave

[1] 
http://www.cdlib.org/services/d2d/metasearch/docs/core_ucsc_oct2004usability.pdf

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Kuba 
[skoc...@gmail.com]
Sent: Tuesday, May 18, 2010 9:57 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] OASIS SRU and CQL, access to most-current drafts

That is quite unfortunate, as we were looking at SRU 2.0 as a possible
candidate for the front-end protocol for Index Data's pazpar2. The
main problem with federate/broadcast/meta (however you want to call it
;) searching is that the back-end databases are scattered in different
locations or simply slow in their response times and in order to
provide decent user experience you need to be able to present some
results "sooner" than others. Waiting for the slowest database to
respond is usually not an option.

On Tue, May 18, 2010 at 5:24 PM, Ray Denenberg, Library of Congress
 wrote:
> On 18 May 2010 15:24, Ray Denenberg, Library of Congress 
> wrote:
>> There is no synchronous operation in SRU.
>
> Sorry, meant to say "no asynchronous .
>
> --Ray
>



--

Cheers,
Jakub


Re: [CODE4LIB] OASIS SRU and CQL, access to most-current drafts

2010-05-18 Thread Walker, David
> What communities?

I thought Peter Noerr, in the thread from last year, did a good job of 
explaining how "metasearch" and "federated search" have come to be adopted by 
different communities:

   http://www.mail-archive.com/code4lib@listserv.nd.edu/msg05188.html

I agree that "broadcast search" and "aggregated index" are clear enough to 
distinguish between the two.  

But I suspect that "aggregated index" is a little too technical in its 
orientation.  I don't see people outside of library programmers using that 
term.  Instead even more vague and confusing terms like "discovery systems" 
seem to be making headway.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mike Taylor 
[m...@indexdata.com]
Sent: Tuesday, May 18, 2010 8:19 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] OASIS SRU and CQL, access to most-current drafts

On 18 May 2010 15:52, Jonathan Rochkind  wrote:
> What terms do you suggest, Mike?

"First, do no harm."

The current situation with federated/meta/broadcast search is
certainly unfortunate; but introducing yet a fourth term to mean the
same thing is not going to make things better.

> I think we're doomed no matter what [...]

I think you should have finished your message there :-)

> [...] with these, after certain communities
> started to use "federated search" and "metasearch" in directly opposite
> ways.

What communities?  Maybe we on the CODE4LIB list collectively carry
enough weight that we could take the most prevalent meanings and
propagate them?

> I also was told recently that what is called an "accordion" in English is
> called a "bandoneon" in Spanish, and what is called a "accordeon" in Spanish
> is called a "bandoneon" in English.

For what it's worth, and I say this as a fully paid-up Englishman, I
have never heard of a bandoneon.


>
> Hope this helps.
>
> Mike Taylor wrote:
>>
>> On 18 May 2010 15:24, Ray Denenberg, Library of Congress 
>> wrote:
>>
>>>
>>> There is no synchronous operation in SRU.
>>>
>>> As for federated  search .
>>>
>>> To digress a moment, you may recall -- I believe it was on this list --
>>> there was discussion (maybe a year ago?) of what that even means and
>>> whether
>>> it is the same or differs from metasearch, whatever that means.  That
>>> discussion was inconclusive.  Anyway, earlier drafts of SRU 2.0  describe
>>> a
>>> metasearch model.  Recently, the committee decided that the terms
>>> "metasearch" and "federated search" are undefined jargon.  We now choose
>>> to
>>> call it "multi-server search".
>>>
>>
Way to go.  Introducing yet ANOTHER synonym can only help!
>>
>> (And don't forget "broadcast search".)
>>
>>
>
>


Re: [CODE4LIB] A call for your OPAC (or other system) statistics! (Browse interfaces)

2010-05-04 Thread Walker, David
Here are some stats from Cal State San Marcos for the past 6 1/2 years 
(2003-10).  All searches other than keyword are browse searches.

  keyword = 596,111
  title = 158,761
  author = 59,293
  subject = 23,692
  call number = 9,477  
  form / genre = 4,838
  other numbers = 14,636

So:
 
  keyword = 596,111
  browse = 270,697

These stats only tracked searches that were performed from the catalog home 
page [1] or that of the library website [2].  Any subsequent searches performed 
inside the catalog itself are not counted here.

I'm not sure if this is really showing that a browse display is popular here, 
though.  I suspect a good number of users (other than librarians) were 
expecting the title and author searches to behave like the keyword search.  But 
those options are browse searches, so they generate hits in favor of the browse.

--Dave

[1] http://library.csusm.edu/catalog/
[2] http://biblio.csusm.edu/

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Bill Dueber 
[b...@dueber.com]
Sent: Monday, May 03, 2010 11:08 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] A call for your OPAC (or other system) statistics! (Browse 
interfaces)

I got email from a person today saying, and I quote,

 "I must say that [the lack of a browse interface] come as a shock (*which
interface cannot browse??*)"

[Emphasis mine]

Here, a "browse interface" is one where you can get a giant list of all the
titles/authors/subjects whatever -- a view on the data devoid of any
searching.

Will those of you out there with "browse interfaces" in your system take a
couple minutes to send along a guesstimate of what percentage of patron
sessions involve their use?

[Note that for right now, I'm excluding "type-ahead" search boxes although
there's an obvious and, in my mind, strong argument to be made that they're
substantially similar for many types of data]

We don't have a browse interface on our (VuFind) OPAC right now. But in the
interest of paying it forward, I can tell you that in Mirlyn, our OPAC, has
numbers like this:

Pct of Mirlyn sessions, Feb/March/April 2010, which included at least one
basic
search and also:

  Go to full record view  46% (we put a lot of info in search results)
  Select/"favorite" an item   15%
  Add a facet:13%
  Export record(s)
   to email/refworks/RIS/etc. 3.4%
  Send to phone (sms) 0.21%
  Click on faq/help/AskUs
 in footer0.17%  (324 total)

Based on 187,784 sessions, 2010.02.01 to 2010.04.31

So...anyone out there able to tell me anything about browse interfaces?

--
Bill Dueber
Library Systems Programmer
University of Michigan Library


Re: [CODE4LIB] Twitter annotations and library software

2010-04-29 Thread Walker, David
> We're using maybe 1% of the spec for 99% of our practice, 
> probably because librarians weren't imaginative (as Jim 
> Weinheimer would say) enough to think of other use cases 
> beyond that most pressing one.

I would suggest it's more because, once you step outside of the primary use 
case for OpenURL, you end-up bumping into *other* standards.

Dorothea's blog post that Jakob referenced in his message is a good example of 
that.  She was trying to use OpenURL (via COinS) to get data into Zotero.  
Mid-way through the post she wonders if maybe she should have gone with unAPI 
instead.  

And, in fact, I think that would have been a better approach.  unAPI is better 
at doing that particular task than OpenURL.  And I think that may explain why 
OpenURL hasn't become the One Standard to Rule Them All, even though it kind of 
presents itself that way.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of MJ Suhonos 
[...@suhonos.ca]
Sent: Thursday, April 29, 2010 5:17 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Twitter annotations and library software

Okay, I know it's cool to hate on OpenURL, but I feel I have to clarify a few 
points:

> OpenURL is of no use if you separate it from the existing infrastructure 
> which is mainly held by companies. No sane person will try to build an open 
> alternative infrastructure because OpenURL is a crappy library-standard like 
> MARC etc.

OpenURL is mostly implemented by libraries, yes, but it isn't necessarily 
*just* a library standard - this is akin to saying that Dublin Core is a 
library standard.  Only sort of.

The other issue I have is that — although Jonathan used the term to make a 
point — OpenURL is *not* an infrastructure, it is a protocol.  Condemning the 
current OpenURL infrastructure (which is mostly a vendor-driven oligopoly) is 
akin to saying in 2004 that HTTP and HTML sucks because Firefox hadn't been 
released yet and all we had was IE6.  Don't condemn the standard because of the 
implementation.

> The OpenURL specification is a 119 page PDF - that alone is a reason to run 
> away as fast as you can.

The main reason for this is because OpenURL can do much, much, much more than 
the simple "resolve a unique copy" use case that libraries use it for.  We're 
using maybe 1% of the spec for 99% of our practice, probably because librarians 
weren't imaginative (as Jim Weinheimer would say) enough to think of other use 
cases beyond that most pressing one.

I'd contend that OpenURL, like other technologies (XML), is greatly 
misunderstood, and therefore abused, and therefore discredited.  I think there 
is also often confusion between the KEV schemas and OpenURL itself (which is 
really what Dorothea's blog rant is about); I'm certainly guilty of this 
myself, as Jonathan can attest.

You don't *have* to use the KEVs with OpenURL, you can use anything, including 
eg. Dublin Core.

> If a twitter annotation setup wants to get adopted then it should not be 
> built on a crappy complex library standard like OpenURL.

I don't quite understand this (but I think I agree) — twitter annotation should 
be built on a data model, and then serialized via whatever protocols make sense 
(which may or may not include OpenURL).

> I must admit that this solution is based on the open assumption that CSL 
> record format contains all information needed for OpenURL, which may not be the 
> case.
> …

A good example.  And this is where you're exactly right that we need better 
tools, namely OpenURL resolvers which can do much more than they do now.  I've 
had the idea for a number of years now that OpenURL functionality should be 
merged into aggregation / discovery layer (eg. OAI harvester)-type systems, 
because, like OAI-PMH, OpenURL can *transport metadata*, we just don't use it 
for that in practice.

A ContextObject is just a triple that makes a single assertion about two 
entities (resources): that A "references" B.  Just like an RDF statement, 
but with more focus on describing the 
entities rather than the assertion.

Maybe if I put it that way, OpenURL sounds a little less crappy.

MJ


Re: [CODE4LIB] Twitter annotations and library software

2010-04-28 Thread Walker, David
I was also just working on DOI with RIS.

It looks like both Endnote and Refworks recognize 'DO' for DOIs.  But 
apparently Zotero does not.  If Zotero supported it, I'd say we'd have a de 
facto standard on our hands.

In fact, I couldn't figure out how to pass a DOI to Zotero using RIS.  Or, at 
least, in my testing I never saw the DOI show up in Zotero.  I don't really use 
Zotero, so I may have missed it.
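
For reference, the kind of RIS record I was testing with looks something like this 
(everything in it, including the DOI, is made up):

   TY  - JOUR
   AU  - Doe, Jane
   TI  - An Example Article
   JO  - Journal of Examples
   VL  - 12
   SP  - 34
   EP  - 56
   PY  - 2009
   DO  - 10.1234/example.5678
   ER  - 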

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Owen Stephens 
[o...@ostephens.com]
Sent: Wednesday, April 28, 2010 2:26 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Twitter annotations and library software

We've had problems with RIS on a recent project. Although there is a
specification (http://www.refman.com/support/risformat_intro.asp), it is (I
feel) lacking enough rigour to ever be implemented consistently. The most
common issue in the wild that I've seen is use of different tags for the
same information (which the specification does not nail down enough to know
when each should be used):

Use of TI or T1 for primary title
Use of AU or A1 for primary author
Use of UR, L1 or L2 to link to 'full text'

Perhaps more significantly the specification doesn't include any field
specifically for a DOI, but despite this EndNote (owned by ISI ResearchSoft,
who are also responsible for the RIS format specification) includes the DOI
in a DO field in its RIS output - not to specification.

Owen

On Wed, Apr 28, 2010 at 9:17 AM, Jakob Voss  wrote:

> Hi
>
> it's funny how quickly you vote against BibTeX, but at least it is a format
> that is frequently used in the wild to create citations. If you call BibTeX
> undocumented and garbage then how do you call MARC which is far more
> difficult to make use of?
>
> My assumption was that there is a specific use case for bibliographic data
> in twitter annotations:
>
> I. Identify publication => this can *only* be done seriously with
> identifiers like ISBN, DOI, OCLCNum, LCCN etc.
>
> II. Deliver a citation => use a citation-oriented format (BibTeX, CSL, RIS)
>
> I was not voting explicitly for BibTeX but at least there is a large
> community that can make use of it. I strongly favour CSL (
> http://citationstyles.org/) because:
>
> - there is a JavaScript CSL-Processor. JavaScript is kind of a punishment
> but it is the natural environment for the Web 2.0 Mashup crowd that is going
> to implement applications that use Twitter annotations
>
> - there are dozens of CSL citation styles so you can display a citation in
> any way you want
>
> As Ross pointed out RIS would be an option too, but I miss the easy open
> source tools that use RIS to create citations from RIS data.
>
> Any other relevant format that I know (Bibont, MODS, MARC etc.) does not
> aim at identification or citation at the first place but tries to model the
> full variety of bibliographic metadata. If your use case is
>
> III. Provide semantic properties and connections of a publication
>
> Then you should look at the Bibliographic Ontology. But III does *not*
> "just subsume" usecase II. - it is a different story that is not beeing told
> by normal people but only but metadata experts, semantic web gurus, library
> system developers etc. (I would count me to this groups). If you want such
> complex data then you should use other systems but Twitter for data exchange
> anyway.
>
> A list of CSL metadata fields can be found at
>
> http://citationstyles.org/downloads/specification.html#appendices
>
> and the JavaScript-Processor (which is also used in Zotero) provides more
> information for developers: http://groups.google.com/group/citeproc-js
>
> Cheers
> Jakob
>
> P.S: An example of a CSL record from the JavaScript client:
>
> {
> "title": "True Crime Radio and Listener Disenchantment with Network
> Broadcasting, 1935-1946",
>  "author": [ {
>"family": "Razlogova",
>"given": "Elena"
>  } ],
>  "container-title": "American Quarterly",
>  "volume": "58",
>  "page": "137-158",
>  "issued": { "date-parts": [ [2006, 3] ] },
>  "type": "article-journal"
>
> }
>
>
> --
> Jakob Voß , skype: nichtich
> Verbundzentrale des GBV (VZG) / Common Library Network
> Platz der Goettinger Sieben 1, 37073 Göttingen, Germany
> +49 (0)551 39-10242, http://www.gbv.de
>



--
Owen Stephens
Owen Stephens Consulting
Web: http://www.ostephens.com
Email: o...@ostephens.com


Re: [CODE4LIB] Code4Lib North planning continues

2010-04-08 Thread Walker, David
I'm not on that conference list, so don't really know how much traffic it gets. 
 

But it seems to me that, since these regional conferences are mostly being held 
at different times of the year from the main conference, the overlap would be 
minimal.

Or not.  I don't know.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of William Denton 
[...@pobox.com]
Sent: Thursday, April 08, 2010 7:45 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib North planning continues

On 8 April 2010, Walker, David quoted:

>> I think a good compromise is to have local meeting
>> conversations on the code4libcon google group.

That list is for organizing the main conference, with details about
getting rooms, food, shuttle buses, hotel booking agents, who can MC
Thursday afternoon, etc.  Mixing that with organizational details *and*
general discussion about all local chapter meetings would confuse
everything, I think.

Bill
--
William Denton, Toronto : miskatonic.org www.frbr.org openfrbr.org


Re: [CODE4LIB] Code4Lib North planning continues

2010-04-08 Thread Walker, David
> I think a good compromise is to have local meeting 
> conversations on the code4libcon google group.

this!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Smith,Devon 
[smit...@oclc.org]
Sent: Thursday, April 08, 2010 6:35 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib North planning continues

I think a good compromise is to have local meeting conversations on the 
code4libcon google group. It keeps the conversations in a central place 
initially created to facilitate face-to-face meetings.

/dev


-Original Message-
From: Code for Libraries on behalf of Ed Summers
Sent: Wed 4/7/2010 10:53 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib North planning continues

On Wed, Apr 7, 2010 at 10:43 PM, William Denton  wrote:
> So far there are just three people with ideas for talks (me, Walter Lewis,
> Art Rhyno).  Have the other local chapters found it works well to have more
> time for informal stuff, or lightning talks, or "Ask Anything" like I see
> NYC is doing?  Sometimes with a smaller group people don't talk so much, but
> sometimes they do.

The thing that bums me out is that this discussion list was largely
created because there were all these discussions going on in niches
like xml4lib, web4lib, perl4lib, php4lib, oss4lib, etc ... and not
enough conversation about computing and libraries and
cross-fertilization between projects/environments.  Now we're seeing
the code4lib discussion list itself fragment into code4libmdc,
code4lib-north, code4libnyc, code4lib-northwest, etc.

I guess an argument could be made that the conversations going on in
these sublists would overwhelm code4lib proper with all sorts of local
noise. But I think ideally we should have crossed that bridge when we
came to it. I think if folks on code4lib saw what was going on in
different locales it would inspire people to do stuff where they are
too.

//Ed


Re: [CODE4LIB] newbie

2010-03-25 Thread Walker, David
Google code has project feeds in Atom, too.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Aaron 
Rubinstein [arubi...@library.umass.edu]
Sent: Thursday, March 25, 2010 10:21 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] newbie

On 3/25/2010 12:47 PM, Ross Singer wrote:

> I disagreed with this back in the day, and I still disagree with
> running our own code repository.  There are too many good code hosting
> solutions out there for this to be justifiable.  We used to run an SVN
> repo at code4lib.org, but we never bothered rebuilding it after our
> server got hacked.
>
> Actually I think GitHub/Google Code and their ilk are a much better
> solution -- especially for pastebins/gists/etc.  What would be useful,
> though, is an aggregation of the Code4lib's community spread across
> these sites, sort of what like the Planet does for blog postings, etc.
> or what Google Buzz does for the people I follow (i.e. I see their
> gists).
>
> I'd buy in to that (and help support it), but I'm not sure how one
> would go about it.
>
> -Ross.

I think the old discussion was looking more for a way to host code
snippets as opposed to version controlled projects, which I agree that
GitHub and the like already do nicely.  Would we really need more than a
code4lib.pastebin.com?  That being said, a code planet would be really
cool.  I know that GitHub and BitBucket publish ATOM feeds of a user's
activity but I'm not so sure about other code hosting sites.

Anyways, just a thought...

Aaron


Re: [CODE4LIB] ignore my last message

2010-03-08 Thread Walker, David
That was not a reply but a new message.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mike Taylor 
[m...@indexdata.com]
Sent: Monday, March 08, 2010 9:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] ignore my last message

On 8 March 2010 17:04, Jonathan Rochkind  wrote:
> As usual, I'm great at sending the WORST messages to the wrong list. My
> email client is messing up all over.  Please do not reply to that one on
> list, please ignore it, and Eric please remove it from teh archives is
> possible.
>
> Man, today is not my day. I've got to stop using email for a year or
> something.

This kind of thing is always going to happen from time to time on a
list configured to fail maximally hard when it fails at all.  See
Reply-To Munging Considered Harmful:
http://www.unicom.com/pw/reply-to-harmful.html


Re: [CODE4LIB] Transferring an article bib data From Article Linker page to ILL form

2010-03-03 Thread Walker, David
Sarah, 

Are you using ILLiad, or perhaps just an e-mailed based ILL form?

Either way, your link resolver should be able to send the OpenURL to that 
system.  ILLiad can accept OpenURLs and auto-populate its form.  It's not too 
difficult to do the same for a home-grown email ILL form.
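
As a rough illustration of the home-grown case -- a sketch only, with made-up 
field and file names, and assuming the resolver passes standard rft.* OpenURL 
keys -- the form page could read the citation out of the query string and 
pre-fill its inputs (note that PHP converts dots in incoming parameter names 
to underscores, so rft.atitle arrives as rft_atitle):

  <?php
  // Pull the citation fields we care about out of the OpenURL request.
  $keys = array('rft_atitle', 'rft_jtitle', 'rft_au', 'rft_volume',
                'rft_spage', 'rft_date');
  $cite = array();
  foreach ($keys as $key) {
      $cite[$key] = isset($_GET[$key]) ? htmlspecialchars($_GET[$key]) : '';
  }
  ?>
  <form method="post" action="send-ill-request.php">
    Article title: <input name="atitle" value="<?php echo $cite['rft_atitle']; ?>" /><br />
    Journal:       <input name="jtitle" value="<?php echo $cite['rft_jtitle']; ?>" /><br />
    Volume:        <input name="volume" value="<?php echo $cite['rft_volume']; ?>" />
    Start page:    <input name="spage"  value="<?php echo $cite['rft_spage']; ?>" /><br />
    <input type="submit" value="Request via ILL" />
  </form>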

This is usually a pretty standard feature for any link resolver.  I would 
contact Serials Solutions to see what they offer in this regard.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Park,Go-Woon 
[gop...@nwmissouri.edu]
Sent: Wednesday, March 03, 2010 2:07 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Transferring an article bib data From Article Linker page 
to ILL form

I am wondering if anybody already has solutions for an automated
interlibrary loan form from an Article Linker or an OpenURL resolver
page.



We have Serials Solutions' 360 Link. I would like to have one button that
copies the contents of the Article Linker result and pastes into our ILL
form when no full-text article is found in other databases. It is
painful to retype all information.



Any suggestion is welcome.

Thank you,



Sarah G. Park

Web/Reference Librarian

B. D. Owens Library | Northwest Missouri State University

(660) 562-1534 | gop...@nwmissouri.edu


Re: [CODE4LIB] Code4Lib 2011 Proposals

2010-03-03 Thread Walker, David
> ALL of that said,  where are the San Diego gang 

la_jolla++

BigD?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Walter Lewis 
[lew...@hhpl.on.ca]
Sent: Wednesday, March 03, 2010 11:20 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Code4Lib 2011 Proposals

On 3 Mar 10, at 9:52 AM, Julia Bauder wrote:

> Also, the farther north we go, the more likely that snow+airplane
> incompatibilities will foil speakers' (and attendees'!) travel plans at the
> last minute, which isn't fun for anyone.
>
> somewhere_out_of_nor'easter_and_lake_effect_range_in_february++

Actually there is a clear line (at least on the eastern half of the continent) 
where the further north you go, the *less* snow you got this winter.  Buffalo is 
trailing a number of places on the east coast in total snow accumulation and 
Toronto has been dusted a few times this winter, with nothing of real 
substance.  Detroit and Chicago were well below seasonal averages last time I 
checked.

ALL of that said,  where are the San Diego gang or the folks from Miami?

Walter
  who can only dream of pubs with open patios in February


Re: [CODE4LIB] Sunday in Asheville

2010-02-17 Thread Walker, David
The hockey game is on MSNBC.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Sean Hannan 
[shan...@jhu.edu]
Sent: Wednesday, February 17, 2010 11:58 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Sunday in Asheville

If I'm remembering correctly, NBC is opting to show Ice Dancing over the 
USA/Canada game.

Yay, NBC.

-Sean

On Feb 17, 2010, at 2:53 PM, David Fiander wrote:

> Seriously, are any other sports going to be broadcast during that time slot?
>
> On Wed, Feb 17, 2010 at 14:51, Julia Bauder  wrote:
>> Ooh!  Ooh!  I want to watch the hockey game!  (As long as y'all won't throw
>> things at me if I root for Canada)  They have an NHL team in North
>> Carolina--there have to be SOME hockey fans in the state.
>>
>> The Bier Garden is listed as a sports bar on Yelp, and their Web site says
>> they have 16 televisions -- I'm sure we can convince them to tune a measly
>> one TV to the hockey game.
>>
>> Julia
>>
>>
>>
>> *
>>
>> Julia Bauder
>>
>> Data Services Librarian
>>
>> Grinnell College Libraries
>>
>>  Sixth Ave.
>>
>> Grinnell, IA 50112
>>
>>
>>
>> 641-269-4431
>>
>>
>> On Wed, Feb 17, 2010 at 1:45 PM, Andrew Darby  wrote:
>>
>>> There's also the Canada/US Olympic men's hockey game on Sunday night
>>> at 7:30 EST.  Finding an establishment willing to turn it on might be
>>> a challenge, though . . . .
>>>
>>> On Wed, Feb 17, 2010 at 1:41 PM, Tania Fersenheim 
>>> wrote:
 I emailed them a few questions awhile ago at he...@monkpub.com and they
 answered within a few hours, from the address ba...@monkpub.com.
 They seem to have a decent non-Belgian tap list as well.

 Tania

 --
 Tania Fersenheim
 Manager of Library Systems

 Brandeis University
 Library and Technology Services

 415 South Street, (MS 017/P.O. Box 549110)
 Waltham, MA 02454-9110
 Phone: 781.736.4698
 Fax: 781.736.4577
 email: tan...@brandeis.edu

> -Original Message-
> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On
> Behalf Of Doran, Michael D
> Sent: Wednesday, February 17, 2010 11:06 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Sunday in Asheville
>
> Hi Mike,
>
>> the Thirsty Monk [1].  It's a half-mile from the conference
> hotel, so
>> it's easily walkable/stumbleable.
>>
>>  1. http://www.yelp.com/biz/thirsty-monk-pub-asheville
>
> The Yelp entry has their address being 50 Commerce St,
> Asheville, NC 28801.  However their website
> (http://www.monkpub.com/) has them at 92 Patton Ave,
> Asheville, NC 28801 (which is even closer to the conference
> hotel).  Google maps now has Hookah Joe's at the 50 Commerce
> St address, so perhaps the Thirsty Monk has moved.  They are
> not answering their phone (828-254-5470) this early, but I
> will try them later on to get clarification.
>
>> I hope to run into some of you folks there.  If you're into Belgian
>> beer and a different pub atmosphere, do join me.
>
> Belgian beer is my favorite, so I plan on going (even if you
> are going to be there -- just teasing!).  I didn't notice any
> Atomium on draft, though (previewing the beer menu is how I
> happened to notice the address discrepancy).
>
> -- Michael
>
> # Michael Doran, Systems Librarian
> # University of Texas at Arlington
> # 817-272-5326 office
> # 817-688-1926 mobile
> # do...@uta.edu
> # http://rocky.uta.edu/doran/
>
>
>> -Original Message-
>> From: Code for Libraries [mailto:code4...@listserv.nd.edu]
> On Behalf Of
>> Michael J. Giarlo
>> Sent: Wednesday, February 17, 2010 8:39 AM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: [CODE4LIB] Sunday in Asheville
>>
>> Folks,
>>
>> We have a fabulous slate of social activities lined up for
> this year's
>> conference in Asheville (thanks to, well, y'all).  But those of you
>> arriving on Sunday will notice there are no planned outings that
>> night!  Oh noez!  Well, I'm planning to spend my post-dinner time at
>> the Thirsty Monk [1].  It's a half-mile from the conference
> hotel, so
>> it's easily walkable/stumbleable.
>>
>> I hope to run into some of you folks there.  If you're into Belgian
>> beer and a different pub atmosphere, do join me.
>>
>> -Mike
>>
>> P.S. If you'd like to reach me via phone, my number is: the NJ area
>> code beginning with seven, followed by the numerically lower Santa
>> Monica (CA) area code, followed by the sum of the prior
> value added to
>> the number of the beast, padded with one zero.
>>
>>  1. http://w

Re: [CODE4LIB] Sunday in Asheville

2010-02-17 Thread Walker, David
There is a Duke basketball game on then, and they do love them some college 
basketball in North Carolina.  

NASCAR should be over by 7:00. 

I think you're good.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of David Fiander 
[da...@fiander.info]
Sent: Wednesday, February 17, 2010 11:53 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Sunday in Asheville

Seriously, are any other sports going to be broadcast during that time slot?

On Wed, Feb 17, 2010 at 14:51, Julia Bauder  wrote:
> Ooh!  Ooh!  I want to watch the hockey game!  (As long as y'all won't throw
> things at me if I root for Canada)  They have an NHL team in North
> Carolina--there have to be SOME hockey fans in the state.
>
> The Bier Garden is listed as a sports bar on Yelp, and their Web site says
> they have 16 televisions -- I'm sure we can convince them to tune a measly
> one TV to the hockey game.
>
> Julia
>
>
>
> *
>
> Julia Bauder
>
> Data Services Librarian
>
> Grinnell College Libraries
>
>  Sixth Ave.
>
> Grinnell, IA 50112
>
>
>
> 641-269-4431
>
>
> On Wed, Feb 17, 2010 at 1:45 PM, Andrew Darby  wrote:
>
>> There's also the Canada/US Olympic men's hockey game on Sunday night
>> at 7:30 EST.  Finding an establishment willing to turn it on might be
>> a challenge, though . . . .
>>
>> On Wed, Feb 17, 2010 at 1:41 PM, Tania Fersenheim 
>> wrote:
>> > I emailed them a few questions awhile ago at he...@monkpub.com and they
>> > answered within a few hours, from the address ba...@monkpub.com.
>> > They seem to have a decent non-Belgian tap list as well.
>> >
>> > Tania
>> >
>> > --
>> > Tania Fersenheim
>> > Manager of Library Systems
>> >
>> > Brandeis University
>> > Library and Technology Services
>> >
>> > 415 South Street, (MS 017/P.O. Box 549110)
>> > Waltham, MA 02454-9110
>> > Phone: 781.736.4698
>> > Fax: 781.736.4577
>> > email: tan...@brandeis.edu
>> >
>> >> -Original Message-
>> >> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On
>> >> Behalf Of Doran, Michael D
>> >> Sent: Wednesday, February 17, 2010 11:06 AM
>> >> To: CODE4LIB@LISTSERV.ND.EDU
>> >> Subject: Re: [CODE4LIB] Sunday in Asheville
>> >>
>> >> Hi Mike,
>> >>
>> >> > the Thirsty Monk [1].  It's a half-mile from the conference
>> >> hotel, so
>> >> > it's easily walkable/stumbleable.
>> >> >
>> >> >  1. http://www.yelp.com/biz/thirsty-monk-pub-asheville
>> >>
>> >> The Yelp entry has their address being 50 Commerce St,
>> >> Asheville, NC 28801.  However their website
>> >> (http://www.monkpub.com/) has them at 92 Patton Ave,
>> >> Asheville, NC 28801 (which is even closer to the conference
>> >> hotel).  Google maps now has Hookah Joe's at the 50 Commerce
>> >> St address, so perhaps the Thirsty Monk has moved.  They are
>> >> not answering their phone (828-254-5470) this early, but I
>> >> will try them later on to get clarification.
>> >>
>> >> > I hope to run into some of you folks there.  If you're into Belgian
>> >> > beer and a different pub atmosphere, do join me.
>> >>
>> >> Belgian beer is my favorite, so I plan on going (even if you
>> >> are going to be there -- just teasing!).  I didn't notice any
>> >> Atomium on draft, though (previewing the beer menu is how I
>> >> happened to notice the address discrepancy).
>> >>
>> >> -- Michael
>> >>
>> >> # Michael Doran, Systems Librarian
>> >> # University of Texas at Arlington
>> >> # 817-272-5326 office
>> >> # 817-688-1926 mobile
>> >> # do...@uta.edu
>> >> # http://rocky.uta.edu/doran/
>> >>
>> >>
>> >> > -Original Message-
>> >> > From: Code for Libraries [mailto:code4...@listserv.nd.edu]
>> >> On Behalf Of
>> >> > Michael J. Giarlo
>> >> > Sent: Wednesday, February 17, 2010 8:39 AM
>> >> > To: CODE4LIB@LISTSERV.ND.EDU
>> >> > Subject: [CODE4LIB] Sunday in Asheville
>> >> >
>> >> > Folks,
>> >> >
>> >> > We have a fabulous slate of social activities lined up for
>> >> this year's
>> >> > conference in Asheville (thanks to, well, y'all).  But those of you
>> >> > arriving on Sunday will notice there are no planned outings that
>> >> > night!  Oh noez!  Well, I'm planning to spend my post-dinner time at
>> >> > the Thirsty Monk [1].  It's a half-mile from the conference
>> >> hotel, so
>> >> > it's easily walkable/stumbleable.
>> >> >
>> >> > I hope to run into some of you folks there.  If you're into Belgian
>> >> > beer and a different pub atmosphere, do join me.
>> >> >
>> >> > -Mike
>> >> >
>> >> > P.S. If you'd like to reach me via phone, my number is: the NJ area
>> >> > code beginning with seven, followed by the numerically lower Santa
>> >> > Monica (CA) area code, followed by the sum of the prior
>> >> value added to
>> >> > the number of the beast, padded with one zero.
>> >> >
>> >> >  1. http://www.yelp.com/biz/thirsty-monk-p

Re: [CODE4LIB] change management system

2010-02-11 Thread Walker, David
What are you using for that ticketing system?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Fleming, 
Declan [dflem...@ucsd.edu]
Sent: Thursday, February 11, 2010 11:52 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Hi - it's primarily designed for things we develop.

We have a Change Management ticketing system following ITIL principles
that tracks change requests for anything in production, from working
apps we've developed, to III, to the public workstations, and even
account adds/moves/changes.

Tickets from this system will sometimes be moved into JIRA when they ask
for a change to something we've developed.

D

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Walker, David
Sent: Thursday, February 11, 2010 9:49 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Hey Declan,

Does that process only apply to applications you develop yourselves?
How about the Innovative system, or open source applications developed
elsewhere?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of
Fleming, Declan [dflem...@ucsd.edu]
Sent: Thursday, February 11, 2010 9:31 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Hey Dave!  We need to go grab lunch sometime...

We use JIRA for our bug tracking and tracking feature requests (to some
extent).

UCSD Libraries IT has a strict Development/Operations split, with a weak
Test phase in the middle - weak because I don't have a QA or config
manager, and I'm teaching academics the processes I learned while
working in the software industry.

We follow a 2 week deploy process where Dev can submit any packages to
Ops every other Friday.  On Monday or Tuesday (depending on what's on
fire in Ops), these packages are then staged to a Test server that only
Ops has admin privs on.   If the project people have a test plan, they
have the rest of the week to say whether the package passes or not.  If
yes, we roll the package to production on the next Monday or Tuesday.
If not, we kick the package back to Dev and they do their fixes and unit
tests and wait for the next cycle.

This system keeps production (and thus, customers) from being thrashed
with not-quite-ready builds.  There is a lot of natural tension in our
system, especially with the lack of a QA manager, and most of the config
management being done by Ops.  We require a high degree of communication
between the Ops and Dev managers on dates, test pass/fail conditions,
code quality, process mgt, etc.  This can be a challenge as Ops and Dev
have different missions at times.

D

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Walker, David
Sent: Thursday, February 11, 2010 8:55 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Thanks to everyone who responded.  The comments have been very helpful!

Is anyone using RT? [1]

Also, I'm curious how many academic libraries are following a formal
change management process?

By that, I mean: Do you maintain a strict separation between developers
and operations staff (the people who put the changes into production)?
And do you have something like a Change Advisory Board that reviews
changes before they can be put into production?

Just as background to these questions:

We've been asked to come up with a change management procedure/system
for a variety of academic technology groups here that have not
previously had such (at least nothing formal).  But we find the process
that the "business" (i.e., PeopleSoft) folks here follow to be a bit
too elaborate for our purposes.  They use Remedy.

--Dave

[1] http://bestpractical.com/rt

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mark A.
Matienzo [m...@matienzo.org]
Sent: Thursday, February 11, 2010 5:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

I'm inclined to say that any sort of tracking software could be used
for this - it's mostly an issue of creating and sticking with policy
decisions about what the various workflow states are, how things
become triaged, etc. I believe if you define that up front, you could
find Trac or any other tracking/issue system adaptable to what you
want to do.

Mark A. Matienzo
Digital Archivist, Manuscripts and Archives
Yale University Library


Re: [CODE4LIB] change management system

2010-02-11 Thread Walker, David
Hey Declan,

Does that process only apply to applications you develop yourselves?  How about 
the Innovative system, or open source applications developed elsewhere?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Fleming, 
Declan [dflem...@ucsd.edu]
Sent: Thursday, February 11, 2010 9:31 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Hey Dave!  We need to go grab lunch sometime...

We use JIRA for our bug tracking and tracking feature requests (to some
extent).

UCSD Libraries IT has a strict Development/Operations split, with a weak
Test phase in the middle - weak because I don't have a QA or config
manager, and I'm teaching academics the processes I learned while
working in the software industry.

We follow a 2 week deploy process where Dev can submit any packages to
Ops every other Friday.  On Monday or Tuesday (depending on what's on
fire in Ops), these packages are then staged to a Test server that only
Ops has admin privs on.   If the project people have a test plan, they
have the rest of the week to say whether the package passes or not.  If
yes, we roll the package to production on the next Monday or Tuesday.
If not, we kick the package back to Dev and they do their fixes and unit
tests and wait for the next cycle.

This system keeps production (and thus, customers) from being thrashed
with not-quite-ready builds.  There is a lot of natural tension in our
system, especially with the lack of a QA manager, and most of the config
management being done by Ops.  We require a high degree of communication
between the Ops and Dev managers on dates, test pass/fail conditions,
code quality, process mgt, etc.  This can be a challenge as Ops and Dev
have different missions at times.

D

-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Walker, David
Sent: Thursday, February 11, 2010 8:55 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

Thanks to everyone who responded.  The comments have been very helpful!

Is anyone using RT? [1]

Also, I'm curious how many academic libraries are following a formal
change management process?

By that, I mean: Do you maintain a strict separation between developers
and operations staff (the people who put the changes into production)?
And do you have something like a Change Advisory Board that reviews
changes before they can be put into production?

Just as background to these questions:

We've been asked to come up with a change management procedure/system
for a variety of academic technology groups here that have not
previously had such (at least nothing formal).  But we find the process
that the "business" (i.e., PeopleSoft) folks here follow to be a bit
too elaborate for our purposes.  They use Remedy.

--Dave

[1] http://bestpractical.com/rt

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mark A.
Matienzo [m...@matienzo.org]
Sent: Thursday, February 11, 2010 5:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

I'm inclined to say that any sort of tracking software could be used
for this - it's mostly an issue of creating and sticking with policy
decisions about what the various workflow states are, how things
become triaged, etc. I believe if you define that up front, you could
find Trac or any other tracking/issue system adaptable to what you
want to do.

Mark A. Matienzo
Digital Archivist, Manuscripts and Archives
Yale University Library


Re: [CODE4LIB] change management system

2010-02-11 Thread Walker, David
Thanks to everyone who responded.  The comments have been very helpful!

Is anyone using RT? [1]

Also, I'm curious how many academic libraries are following a formal change 
management process?  

By that, I mean: Do you maintain a strict separation between developers and 
operations staff (the people who put the changes into production)?  And do you 
have something like a Change Advisory Board that reviews changes before they 
can be put into production?

Just as background to these questions: 

We've been asked to come up with a change management procedure/system for a 
variety of academic technology groups here that have not previously had such 
(at least nothing formal).  But we find the process that the "business" (i.e., 
PeopleSoft) folks here follow to be a bit too elaborate for our purposes.  
They use Remedy.

--Dave

[1] http://bestpractical.com/rt

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Mark A. 
Matienzo [m...@matienzo.org]
Sent: Thursday, February 11, 2010 5:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] change management system

I'm inclined to say that any sort of tracking software could be used
for this - it's mostly an issue of creating and sticking with policy
decisions about what the various workflow states are, how things
become triaged, etc. I believe if you define that up front, you could
find Trac or any other tracking/issue system adaptable to what you
want to do.

Mark A. Matienzo
Digital Archivist, Manuscripts and Archives
Yale University Library


[CODE4LIB] change management system

2010-02-10 Thread Walker, David
Can anyone here recommend an open source system for "change management"?  

Not version control, per se.  But the process of requesting, reviewing, and 
approving changes to production systems.  

Does Trac fit into this category?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] marc documentation?

2010-01-27 Thread Walker, David
Do you mean just the 'CONTENT DESIGNATOR HISTORY' at the bottom of each page?

  http://www.loc.gov/marc/bibliographic/bd4xx.html

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Wednesday, January 27, 2010 11:59 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] marc documentation?

I know I've seen documentation on the LC site before for since-abandoned
MARC bib tags, like 400. But I can't for the life of me find it now
navigating around the website or googling. Does anyone know where this is?

Jonathan


Re: [CODE4LIB] urldecode problem and CAS

2010-01-27 Thread Walker, David
So a user arrives at your app.  You see that they are not logged in, and so 
redirect them to the CAS server with a return URL back to your application.

Do you have an example of that URL?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jimmy Ghaphery 
[jghap...@vcu.edu]
Sent: Wednesday, January 27, 2010 9:18 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] urldecode problem and CAS

CODE4LIB,

I'm looking for some urldecode help if possible. I have an app that gets
a call through a url which looks like this in order to pull up a
specific record:
http://../app.cfm?id=15

It is password protected and we have recently moved to CAS for
authentication. After it gets passed from CAS back to our server it
looks like this and tosses an error:
http://../app.cfm?id%3d15

The equals sign translated to %3d
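
One workaround worth trying while the root cause gets sorted out is to decode 
the raw query string once and re-parse it yourself.  The sketch below is PHP 
purely for illustration (the app here is ColdFusion, where the equivalent would 
be URLDecode() applied to CGI.QUERY_STRING).  The caveat: a blanket decode will 
also mangle any value that legitimately contains percent-encoding, so the real 
fix is to stop the service URL from being double-encoded in the first place.

  <?php
  // The query string arrives as "id%3d15", so the normal parser sees no "=".
  $raw = $_SERVER['QUERY_STRING'];
  // Decoding once yields "id=15"; parse_str() then splits it into parameters.
  parse_str(urldecode($raw), $params);
  echo $params['id'];   // prints "15"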

Any ideas are appreciated.

thanks

-Jimmy


--
Jimmy Ghaphery
Head, Library Information Systems
VCU Libraries
http://www.library.vcu.edu
--


Re: [CODE4LIB] image maps + lightbox/thickbox/ibox/etc

2010-01-07 Thread Walker, David
I've taken to using Shadowbox:

  http://www.shadowbox-js.com/

Since one of the things you can bring-up in the box is any external web page, 
it might meet your need for an image map?

--Dave
==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ken Irwin 
[kir...@wittenberg.edu]
Sent: Thursday, January 07, 2010 2:20 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] image maps + lightbox/thickbox/ibox/etc

Hi all,

Does anyone have an AJAX "pop-up window" style tool that works with image maps? 
I'm thinking of something in the the lightbox, thickbox, ibox family. I've 
found a bunch of references to people online *looking* for this functionality, 
but no one finding it. Any ideas?

Thanks!
Ken


Re: [CODE4LIB] Online PHP course?

2010-01-05 Thread Walker, David
> what's the problem(s) with PHP?

I fear this thread may never end.

And I like PHP.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Thomas Krichel 
[kric...@openlib.org]
Sent: Tuesday, January 05, 2010 2:13 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Online PHP course?

  Joe Hourcle writes

> ps.  yes, I could've used this response as an opportunity to bash
> PHP ...  and I didn't, because they might be learning PHP to
> migrate it to something else.

  controversial ;-)

  what's the problem(s) with PHP?


  Cheers,

  Thomas Krichelhttp://openlib.org/home/krichel
http://authorclaim.org/profile/pkr1
   skype: thomaskrichel


Re: [CODE4LIB] ipsCA Certs

2009-12-17 Thread Walker, David
I see now that I'm looking at the intermediate certificate.  The root does 
expire in 2009.

Nevermind. :-)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Walker, David
Sent: Thursday, December 17, 2009 1:40 PM
To: Code for Libraries
Subject: RE: [CODE4LIB] ipsCA Certs

Hi John,

I also got this email. We also recently installed an ipsCA wildcard cert for a 
test EZProxy install.

Looking at the details of our ipsCA wildcard certificate in Firefox, though, I 
can see the chain of certificates going up to the root ipsCA cert.

Firefox says that that root certificate -- ipsCA CLASEA1 Certificate Authority 
-- is good until 2025. I see the same thing in IE, Safari, and I assume every 
other browser I might check.

Do you see that too?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of John Wynstra 
[john.wyns...@uni.edu]
Sent: Thursday, December 17, 2009 1:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] ipsCA Certs

Out of curiosity, did anyone else using ipsCA certs receive notification
that due to the coming expiration of their root CA (December 29,2009),
they would need a reissued cert under a new root CA?

I am uncertain as to how this new Root CA will become a part of the
browsers' trusted roots without some type of user action, including a
software upgrade, but the following library website instructions lead me
to believe that this is not going to be smooth.  http://bit.ly/53Npel

We are just about to go live with EZProxy in January with an ipsCA cert
issued a few months ago, and I am not about to do that if I have serious
browser support issue.


--
<><><><><><><><><><><><><><><><><><><>
John Wynstra
Library Information Systems Specialist
Rod Library
University of Northern Iowa
Cedar Falls, IA  50613
wyns...@uni.edu
(319)273-6399
<><><><><><><><><><><><><><><><><><><>


Re: [CODE4LIB] ipsCA Certs

2009-12-17 Thread Walker, David
Hi John,

I also got this email. We also recently installed an ipsCA wildcard cert for a 
test EZProxy install.

Looking at the details of our ipsCA wildcard certificate in Firefox, though, I 
can see the chain of certificates going up to the root ipsCA cert.  

Firefox says that that root certificate -- ipsCA CLASEA1 Certificate Authority 
-- is good until 2025. I see the same thing in IE, Safari, and I assume every 
other browser I might check.

Do you see that too?
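
For what it's worth, the chain a server actually presents can also be checked 
outside the browser; something like this (hostname is a placeholder) lists 
every certificate sent during the SSL handshake, and each one's validity dates 
can then be read with "openssl x509 -noout -dates":

  openssl s_client -connect proxy.example.edu:443 -showcerts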

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of John Wynstra 
[john.wyns...@uni.edu]
Sent: Thursday, December 17, 2009 1:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] ipsCA Certs

Out of curiosity, did anyone else using ipsCA certs receive notification
that due to the coming expiration of their root CA (December 29,2009),
they would need a reissued cert under a new root CA?

I am uncertain as to how this new Root CA will become a part of the
browsers' trusted roots without some type of user action, including a
software upgrade, but the following library website instructions lead me
to believe that this is not going to be smooth.  http://bit.ly/53Npel

We are just about to go live with EZProxy in January with an ipsCA cert
issued a few months ago, and I am not about to do that if I have serious
browser support issues.


--
<><><><><><><><><><><><><><><><><><><>
John Wynstra
Library Information Systems Specialist
Rod Library
University of Northern Iowa
Cedar Falls, IA  50613
wyns...@uni.edu
(319)273-6399
<><><><><><><><><><><><><><><><><><><>


Re: [CODE4LIB] character-sets for dummies?

2009-12-16 Thread Walker, David
> The names of which character-sets I might be working 
> with here

Depending on how you are getting the data out of your III system, and whether 
or not you've upgraded the system to Unicode, the catalog data is likely in the 
MARC-8 character set.

If you're looking to convert that data to UTF-8 (which I assume you would), 
then your best friend is a program from Index Data called yaz-marcdump, which 
comes with the Yaz toolkit.  It runs on Linux and Windows, and can be invoked 
from the command line or from scripts to quickly and painlessly convert your 
catalog data into UTF-8.

  http://www.indexdata.com/yaz
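
A typical invocation looks something like the line below (this is from memory, 
with placeholder file names -- double-check the exact option names against 
"yaz-marcdump -h" for your version).  The -f/-t pair names the source and 
target character sets, -o keeps the output in ISO 2709 MARC, and -l 9=97 sets 
Leader position 09 to "a" (ASCII 97) so the converted records declare 
themselves as Unicode:

  yaz-marcdump -f MARC-8 -t UTF-8 -o marc -l 9=97 export.mrc > export-utf8.mrc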

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ken Irwin 
[kir...@wittenberg.edu]
Sent: Wednesday, December 16, 2009 9:02 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] character-sets for dummies?

Hi all,

I'm looking for a good source to help me understand character sets and how to 
use them. I pretty much know nothing about this - the whole world of Unicode, 
ASCII, octal, UTF-8, etc. is baffling to me.

My immediate issue is that I think I need to integrate data from a variety of 
character sets into one MySQL table - I expect I need some way to convert from 
one to another, but I don't really even know how to tell which data are in 
which format.

Our homegrown journal list (akin to SerialsSolutions) includes data ingested 
from publishers, vendors, the library catalog (III), etc. When I look at the 
data in emacs, some of it renders like this:
 Revista de Oncolog\303\255a  [slashes-and-digits instead of 
diacritics]
And other data looks more like:
 Revista de Música Latinoamericana    [weird characters instead of diacritics]

My MySQL table is currently set up with the collation set to: utf8-bin , and 
the titles from the second category (weird characters display in emacs) render 
properly when the database data is output to the a web browser. The data from 
the former example (\###) renders as an "I don't know what character this is" 
placeholder in Firefox and IE.

So, can someone please point me toward any or all of the following?

· A good primer for understanding all of this stuff

· A method for converting all of my data to the same character set so 
it plays nicely in the database

· The names of which character-sets I might be working with here

Many thanks!

Ken


Re: [CODE4LIB] holdings standards/protocols

2009-11-16 Thread Walker, David
Innovative does too.

Like Ben mentioned with Voyager Z39.50, simply set the record type to 'OPAC' in 
your yaz client to get the holdings.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of B.C.Charlton 
[b.c.charl...@kent.ac.uk]
Sent: Monday, November 16, 2009 6:30 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] holdings standards/protocols

> Can anyone give me an idea if any/many/all (ILS) Z implementations
> have implemented the holdings information?
>
> Is there a way of testing this using a client such as yaz (e.g. a
> worked example of seeing holdings via Z)

Voyager certainly can - see example below.

I've also got some perl that pulls back opac-xml using the ZOOM module. If 
that's of any use, let me know off-list.

Ben

Z> open nemesis.kent.ac.uk:7090
Connecting...OK.
Sent initrequest.
Connection accepted by v3 target.
ID : 34
Name   : Voyager LMS - Z39.50 Server
Version: 2007.0.4
Options: search present
Elapsed: 0.698674

Z> base voyager
Z> format opac

Z> find 0714120766
Sent searchRequest.
Received SearchResponse.
Search was a success.
Number of hits: 1
records returned: 0
Elapsed: 0.030849

Z> show 1
Sent presentRequest (1+1).
Records: 1
[VOYAGER]Record type: OPAC
Record type: USmarc
00763cam  2200229 a 4500
001 318575
005 20080123143630.0
008 010720s1991xxkabc  eng
015$a 0527672 $a 0527673 $a 0527674 $a 0527675 $a 0686148 $a F210884
020$a 0714120766 (pbk.)
035$9 8000527672
050  4 $a N 5760
100 1  $a Walker, Susan.
245 10 $a Roman art / $c Susan Walker.
260$a London : $b British Museum Press for Trustees of the British Museum, 
$c 1991 $g (repr. 1994).
300$a 72 p. : $b ill. (some col.), col. maps ; $c 22 cm.
500$a Includes bibliographical references (p. 71) and index.
561$a Copy F210884 from the collection of Colin Renfrew.
650  0 $a Art, Roman.
710 2  $a British Museum.
990$a CL335
990$a CL609

Data holdings 0
typeOfRecord: x
encodingLevel: 3
receiptAcqStatus: 0
generalRetention: 8
completeness: 4
dateOfReport: 00
nucCode: TCTCOWL
localLocation: Templeman - Core Text Collection [1 Week Loan]
callNumber: N 5760
circulation 0
 availableNow: 1
 itemId: 442858
 renewable: 0
 onHold: 0
circulation 1
 availableNow: 1
 itemId: 800513
 renewable: 0
 onHold: 0
Data holdings 1
typeOfRecord: x
encodingLevel: 3
receiptAcqStatus: 0
generalRetention: 8
completeness: 4
dateOfReport: 00
nucCode: TMORD
localLocation: Templeman - Main Collection [Ordinary Loan]
callNumber: N 5760
circulation 0
 availableNow: 1
 itemId: 442855
 renewable: 0
 onHold: 0
 temporaryLocation: Medway - Tonbridge [Ordinary Loan]
circulation 1
 availableNow: 1
 itemId: 442856
 renewable: 0
 onHold: 0
 temporaryLocation: Medway - Tonbridge [Ordinary Loan]
circulation 2
 availableNow: 1
 itemId: 442857
 renewable: 0
 onHold: 0
 temporaryLocation: Medway - Tonbridge [Ordinary Loan]
Data holdings 2
typeOfRecord: x
encodingLevel: 4
receiptAcqStatus: 0
generalRetention: 8
completeness: 1
dateOfReport: 00
nucCode: XTONUKCORD
localLocation: Medway - Tonbridge [Ordinary Loan]
callNumber: N 5760
circulation 0
 availableNow: 1
 itemId: 692092
 renewable: 0
 onHold: 0
nextResultSetPosition = 2
Elapsed: 0.013220


Re: [CODE4LIB] activestate and marc::batch

2009-10-01 Thread Walker, David
I had the same problems with MARC:Charset and MARC:File:XML.  Had to compile 
them myself using Microsoft's nmake program.

Here's my less-than-ideal notes.  I may have missed some things at the end 
here, too.  Sorry, I switched to using yaz-dump instead, since that did what I 
wanted.

1. Install ActiveState Perl
2. Download and unpack 'nmake' from microsoft [1] drop files in perl/bin
3. Restart the computer (to get the env variables set)
4. Launch ppm from the command line, install:
 * marc-record
 * xml-sax
 * xml-sax-expat
5. Run this from the command line to fix an oversight in the ppm install
 perl -MXML::SAX -e "XML::SAX->add_parser('XML::SAX::Expat')->save_parsers()"
6. Download and unpack MARC::Charset and MARC:File:XML
7. Go into each of the directories created for those and issue this from the 
command line:
 perl Makefile.PL
 nmake
 nmake test
 nmake install


Hope this is useful.

--Dave

[1]  http://johnbokma.com/perl/make-for-windows.html -- this page describes how 
to use it and provides a link to the MS site where you can download it (for 
free).

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Bryan Baldus 
[bryan.bal...@quality-books.com]
Sent: Thursday, October 01, 2009 12:21 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] activestate and marc::batch

On Thursday, October 01, 2009 2:02 PM, Eric Lease Morgan wrote:
>Specifically, once I install ActiveState Perl on a Windows computer, will I be 
>able to install MARC::Batch and all of its friends as well?

I have no problems running ActiveState Perl with the MARC::Batch/MARC::Record 
distribution set of modules. The only problems I've experienced were with 
MARC::Charset and MARC::File::XML. I haven't had a significant need to use 
these modules, but when I attempted to install them, if I recall correctly, I 
had trouble with the XML parsing modules/dependencies/set-up. Someone with more 
experience working with XML parsing modules would likely be more successful.

Also, when installing MARC::Lint and MARC::Errorchecks through ppm, I believe 
ppm may complain or fail to install one of the modules due to the overlapping 
MARC::Lint::CodeData module included with both.

After installing the MARC::* modules using ppm, I'll usually manually update 
all of the .pm files with the most recent copy (from SourceForge), placing each 
in its appropriate spot in C:\Perl\site\lib\MARC\. I believe this manual 
process may be part of the source of my problems with Charset and XML (though 
for those, I have attempted to follow the official/standard automated 
installation process).

I hope this helps,

Bryan Baldus
bryan.bal...@quality-books.com
eij...@cpan.org
http://home.inwave.com/eija


Re: [CODE4LIB] Library Website Redesign Info and Project Plans

2009-09-16 Thread Walker, David
My wife really likes "Web Redesign: Workflow that Works", by Kelly Goto & Emily 
Cotler.  

The second edition is called Web Redesign 2.0.

  http://www.web-redesign.com/
  http://www.worldcat.org/oclc/57641137

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jason 
Stirnaman [jstirna...@kumc.edu]
Sent: Wednesday, September 16, 2009 11:36 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Library Website Redesign Info and Project Plans

I just came across this yesterday:
http://johncrenshaw.net/blog/web-development-project-process-workflow/
Very high-level and a fairly standard systems design approach, but with some good
web-specific tips thrown in.


>>> Sean Hannan  09/16/09 10:20 AM >>>
We're currently in the middle of a library website redesign as well.
For the most part, we have framed our project using Jesse James
Garrett's The Elements of User Experience
(https://wiki.library.jhu.edu/download/attachments/30737/elements.pdf
).  It has been immensely helpful in plotting out our work from the
User Experience touchy-feely end to the Information Architecture to
the visual design and implementation.

-Sean

---
Sean Hannan
Web Developer
Sheridan Libraries
Johns Hopkins University

On Sep 16, 2009, at 10:52 AM, Rosalyn Metz wrote:

> Hi All,
>
> I'm about to embark on a library website redesign.  I've started
> thinking about creating a project plan, but I honestly don't know
> where to start.
>
> I saw this website redesign presentation Lorcan Dempsey tweeted about:
> http://www.ucd.ie/library/guides/powerpoint/rpan_ppt2/index.swf  And
> started thinking, I wonder if anyone else has similar slides or
> project plans or advice.  I of course asked the Google but I didn't
> really find any project plans.  (If you're curious what I did find,
> take a look here:
> http://delicious.com/rosy1280/library+website-redesign)
>
> I do of course realize that every library is different, but I'm hoping
> that any information you all might be able to provide could help get
> the juices flowing.
>
> Thanks for your help in advance.
> Rosalyn
>
> Rosalyn


Re: [CODE4LIB] EzProxy and recaptcha

2009-08-25 Thread Walker, David
So I return to the lists here somewhat sheepishly to admit that the problem was 
solved simply by adding the reCaptcha domain to our EZproxy stanza with the 
magic Javascript directives:

  DJ recaptcha.net
  HJ recaptcha.net

One of those must, I'm guessing, fetch the reCaptcha Javascript without an HTTP 
referrer, or possibly even using the original vendor site's domain in the 
referrer, since that's the only way it will work.  Either way, solved our 
problem.

ezproxy++

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu
____
From: Walker, David
Sent: Tuesday, August 25, 2009 12:33 PM
To: CODE4LIB@LISTSERV.ND.EDU
Cc: web4...@webjunction.org
Subject: EzProxy and recaptcha

Casting a net far and wide on this, sorry.

We're using EZproxy to proxy a website that also happens to have reCaptcha on 
it.

I guess reCaptcha keys are tied to domain names, so when the Javascript is 
brought into the page via the <script> tag, it sees that the page is 
'proxy.example.edu' instead of 'www.vendorsite.com', and we end-up with an 
error from reCaptcha saying:

   This reCAPTCHA key isn't authorized for the given domain.

That all makes sense.  But can anyone fathom a workaround?

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] EzProxy and recaptcha

2009-08-25 Thread Walker, David
I'm thinking this may be the only solution.  I will mention it to the vendor, 
Ryan, thanks!

--Dave
==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Wick, Ryan 
[ryan.w...@oregonstate.edu]
Sent: Tuesday, August 25, 2009 1:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] EzProxy and recaptcha

reCAPTCHA keys are tied to a domain name by default, but they also offer
global keys. From an admin page:

If you wish to use your key across a large number of domains (e.g., if
you are a hosting provider, OEM, etc.), select the global key option.
You may want to use a descriptive domain name such as
"global-key.mycompany.com"


Ryan Wick
Information Technology Consultant
Special Collections
Oregon State University Libraries
http://osulibrary.oregonstate.edu/specialcollections


-Original Message-
From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
Walker, David
Sent: Tuesday, August 25, 2009 12:34 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] EzProxy and recaptcha

Casting a net far and wide on this, sorry.

We're using EZproxy to proxy a website that also happens to have
reCaptcha on it.

I guess reCaptcha keys are tied to domain names, so when the Javascript
is brought into the page via the <script> tag, it sees that the page
is 'proxy.example.edu' instead of 'www.vendorsite.com', and we end-up
with an error from reCaptcha saying:

   This reCAPTCHA key isn't authorized for the given domain.

That all makes sense.  But can anyone fathom a workaround?

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] EzProxy and recaptcha

2009-08-25 Thread Walker, David
> Is this something that can be done using 
> Find/Replace with the ^A modifier?

I don't think so, after reading the documentation.  But thank you for those 
links, Albert, I really appreciate it.

I think the ultimate issue is that, when the browser fetches the recaptcha 
Javascript, it sends a referrer that says this is from my proxy server instead 
of the vendor site.

So, unless EZProxy is set-up to manipulate the HTTP referrer header -- which 
I'm thinking is unlikely if not impossible -- then it's not something we can 
fix on the EZproxy side.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Bertram, 
Albert [bertr...@umich.edu]
Sent: Tuesday, August 25, 2009 1:08 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] EzProxy and recaptcha

Hi Dave,

Is this something that can be done using Find/Replace with the ^A modifier?

Find NAME="_PRIORREFERER" VALUE="http://
Replace NAME="_PRIORREFERER" VALUE="http://^A

The documentation says it only works after http:// or https://,  so it may not 
work if you're only passing the hostname around.

http://pluto.potsdam.edu/ezproxywiki/index.php/Find_And_Replace
http://www.oclc.org/us/en/support/documentation/ezproxy/cfg/find/
http://www.oclc.org/support/documentation/ezproxy/db/lexisnexis.htm

Cheers,

Albert


From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Walker, David 
[dwal...@calstate.edu]
Sent: Tuesday, August 25, 2009 3:33 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] EzProxy and recaptcha

Casting a net far and wide on this, sorry.

We're using EZproxy to proxy a website that also happens to have reCaptcha on 
it.

I guess reCaptcha keys are tied to domain names, so when the Javascript is 
brought into the page via the <script> tag, it sees that the page is 
'proxy.example.edu' instead of 'www.vendorsite.com', and we end-up with an 
error from reCaptcha saying:

   This reCAPTCHA key isn't authorized for the given domain.

That all makes sense.  But can anyone fathom a workaround?

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


[CODE4LIB] EzProxy and recaptcha

2009-08-25 Thread Walker, David
Casting a net far and wide on this, sorry.

We're using EZproxy to proxy a website that also happens to have reCaptcha on 
it.  

I guess reCaptcha keys are tied to domain names, so when the Javascript is 
brought into the page via the <script> tag, it sees that the page is 
'proxy.example.edu' instead of 'www.vendorsite.com', and we end-up with an 
error from reCaptcha saying:

   This reCAPTCHA key isn't authorized for the given domain.

That all makes sense.  But can anyone fathom a workaround?

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] tricky mod_rewrite

2009-07-01 Thread Walker, David
> How can I write an .htaccess that's path-independent 
> if I like to exclude certain files in that directory, 
> such as index.html?

This is what the Zend Framework uses.  I think it's pretty clever:

  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d 

  RewriteRule ^.*$ index.php

It basically says that, if the request is for a real, physical file or 
directory within your application, then Apache should go ahead and serve it up 
directly.  If it's not, then rewrite the request through your script 
(index.php).  And since index.html is a real, physical file, the !-f condition 
already keeps it out of the rewrite -- no path-specific RewriteCond needed, so 
the whole thing stays path-independent.
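
Putting that together with the RewriteBase discussion, a complete, 
path-independent .htaccess would just be the following -- a minimal sketch, 
assuming the front controller is index.php and sits in the same directory:

  RewriteEngine On

  # only rewrite when the request is NOT an existing file or directory;
  # real files such as index.html get served as-is
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule ^.*$ index.php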

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back 
[god...@gmail.com]
Sent: Wednesday, July 01, 2009 8:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 10:38 AM, Walker, David  wrote:

> > They can create .htaccess files, but don't always
> > have control of the main Apache httpd.conf or the
> > root directory.
>
> Just to be clear, I didn't mean just the root directory itself.  If
> .htaccess lives within a sub-directory of the Apache root, then you _don't_
> need RewriteBase.
>
> RewriteBase is only necessary when you're in a virtual directory, which is
> physically located outside of Apache's DocumentRoot path.
>
> Correct me if I'm wrong.
>

You are correct!  If I omit the RewriteBase, it still works in this case.

Let's have some more of that sendmail koolaid and up the challenge.

How can I write an .htaccess that's path-independent if I like to exclude
certain files in that directory, such as index.html?  So far, I've been
doing:

RewriteCond %{REQUEST_URI} !^/services/tictoclookup/standalone/index.html

To avoid running my script for index.html.  How would I do that?  (Hint: the
use of SERVER variables on the right-hand side in the CondPattern of a
RewriteCond is not allowed, but some trickery may be possible, according to
http://www.issociate.de/board/post/495372/Server-Variables_in_CondPattern_of_RewriteCond_directive.html)

 - Godmar


Re: [CODE4LIB] tricky mod_rewrite

2009-07-01 Thread Walker, David
> They can create .htaccess files, but don't always 
> have control of the main Apache httpd.conf or the 
> root directory.

Just to be clear, I didn't mean just the root directory itself.  If .htaccess 
lives within a sub-directory of the Apache root, then you _don't_ need 
RewriteBase.

RewriteBase is only necessary when you're in a virtual directory, which is 
physically located outside of Apache's DocumentRoot path.

Correct me if I'm wrong.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back 
[god...@gmail.com]
Sent: Wednesday, July 01, 2009 7:23 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 10:18 AM, Walker, David  wrote:

> > Is it possible to write a .htaccess file that works
> > *no matter* where it is located
>
> I don't believe so.
>
> If the .htaccess file lives in a directory inside of the Apache root
> directory, then you _don't_ need to specify a RewriteBase.  It's really only
> necessary when .htaccess lives in a virtual directory outside of the Apache
> root.
>

I see.

Unfortunately, that's the common deployment case by non-administrators (many
librarians). They can create .htaccess files, but don't always have control
of the main Apache httpd.conf or the root directory.

 - Godmar


Re: [CODE4LIB] tricky mod_rewrite

2009-07-01 Thread Walker, David
> Is it possible to write a .htaccess file that works 
> *no matter* where it is located

I don't believe so.

If the .htaccess file lives in a directory inside of the Apache root directory, 
then you _don't_ need to specify a RewriteBase.  It's really only necessary 
when .htaccess lives in a virtual directory outside of the Apache root.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Godmar Back 
[god...@gmail.com]
Sent: Wednesday, July 01, 2009 6:20 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] tricky mod_rewrite

On Wed, Jul 1, 2009 at 9:13 AM, Peter Kiraly  wrote:

> From: "Godmar Back" 
>
>> is it possible to write this without hardwiring the RewriteBase in it?  So
>> that it can be used, for instance, in an .htaccess file from within any
>> /path?
>>
>
> Yes, you can put it into a .htaccess file, and the URL rewrite will
> apply on that directory only.
>

You misunderstood the question; let me rephrase it:

Can I write a .htaccess file without specifying the path where the script
will be located in RewriteBase?
For instance, consider
http://code.google.com/p/tictoclookup/source/browse/trunk/standalone/.htaccess
Here, anybody who wishes to use this code has to adapt the .htaccess file to
their path and change the "RewriteBase" entry.

Is it possible to write a .htaccess file that works *no matter* where it is
located, entirely based on where it is located relative to the Apache root
or an Apache directory?

 - Godmar


Re: [CODE4LIB] How to access environment variables in XSL

2009-06-19 Thread Walker, David
Michael, 

What XSLT processor and programming language are you using?
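
In the meantime, whatever the processor, the usual pattern is the same: have 
the calling code hand those values to the stylesheet as parameters.  A minimal 
sketch, assuming a PHP front end (file names here are just placeholders); with 
Java/Tomcat the equivalent call is Transformer.setParameter():

  <?php
  // pass selected server variables into the stylesheet as parameters
  $xsl = new DOMDocument();
  $xsl->load('page.xsl');

  $xml = new DOMDocument();
  $xml->load('data.xml');

  $proc = new XSLTProcessor();
  $proc->importStylesheet($xsl);
  $proc->setParameter('', 'server_name', $_SERVER['SERVER_NAME']);
  $proc->setParameter('', 'remote_addr', $_SERVER['REMOTE_ADDR']);

  echo $proc->transformToXML($xml);
  ?>

And then declare them at the top of the stylesheet:

  <xsl:param name="server_name" />
  <xsl:param name="remote_addr" />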

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Doran, Michael 
D [do...@uta.edu]
Sent: Friday, June 19, 2009 12:44 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] How to access environment variables in XSL

I am working with some XSL pages that serve up HTML on the web.  I'm new to 
XSL.   In my prior web development, I was accustomed to being able to access 
environment variables (and their values, natch) in my CGI scripts and/or via 
Server Side Includes.  Is there an equivalent mechanism for accessing those 
environment variables within an XSL page?

These are examples of the variables I'm referring to:
SERVER_NAME
SERVER_PORT
HTTP_HOST
DOCUMENT_URI
REMOTE_ADDR
HTTP_REFERER

In a Perl CGI script, I would do something like this:
my $server = $ENV{'SERVER_NAME'};

Or in an SSI, I could do something like this:
<!--#echo var="SERVER_NAME" -->

If it matters, I'm working in: Solaris/Apache/Tomcat

I've googled this but not found anything useful yet (except for other people 
asking the same question).  Maybe I'm asking the wrong question.  Any help 
would be appreciated.

-- Michael

# Michael Doran, Systems Librarian
# University of Texas at Arlington
# 817-272-5326 office
# 817-688-1926 mobile
# do...@uta.edu
# http://rocky.uta.edu/doran/


Re: [CODE4LIB] FW: [CODE4LIB] openurl.info ?

2009-05-19 Thread Walker, David
>>> >> Admin FAX:
>>> >> Admin FAX Ext.:
>>> >> Admin Email:nis...@niso.org <mailto:email%3anis...@niso.org>
>>> >> Billing ID:DOT-132FHTD2SCKP
>>> >> Billing Name:Patricia Stevens
>>> >> Billing Organization:NISO - NATIONAL INFORMATION STANDARD ORGANIZATION
>>> >> Billing Street1:4733 Bethesda Ave.
>>> >> Billing Street2:
>>> >> Billing Street3:
>>> >> Billing City:Bethesda
>>> >> Billing State/Province:MD
>>> >> Billing Postal Code:20814
>>> >> Billing Country:BE
>>> >> Billing Phone:+32.3016542512
>>> >> Billing Phone Ext.:
>>> >> Billing FAX:
>>> >> Billing FAX Ext.:
>>> >> Billing Email:nis...@niso.org <mailto:email%3anis...@niso.org>
>>> >> Tech ID:DOT-IQIOP5LKRKM0
>>> >> Tech Name:Pat Stevens
>>> >> Tech Organization:NISO - NATIONAL INFORMATION STANDARD ORGANIZATION
>>> >> Tech Street1:4733 Bethesda Ave.
>>> >> Tech Street2:
>>> >> Tech Street3:
>>> >> Tech City:Bethesda
>>> >> Tech State/Province:MD
>>> >> Tech Postal Code:20814
>>> >> Tech Country:BE
>>> >> Tech Phone:+32.3016542512
>>> >> Tech Phone Ext.:
>>> >> Tech FAX:
>>> >> Tech FAX Ext.:
>>> >> Tech Email:nis...@niso.org <mailto:email%3anis...@niso.org>
>>> >> Name Server:DNS.OCLC.ORG <http://DNS.OCLC.ORG>
>>> >> Name Server:DNS2.OCLC.ORG <http://DNS2.OCLC.ORG>
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >> Name Server:
>>> >>
>>> >> On Fri, May 15, 2009 at 10:39 AM, Phil Adams  wrote:
>>>> >>> I heard via twitter that:
>>>> >>>
>>>> >>> "openurl.info <http://openurl.info>  domain name expired on sunday!
>>>> somebody
>>>> >>> messed up"
>>>> >>>
>>>> >>> Regards,
>>>> >>> Philip Adams
>>>> >>> Senior Assistant Librarian (Electronic Services Development)
>>>> >>> De Montfort University Library
>>>> >>> 0116 250 6397
>>>> >>>
>>>> >>> -Original Message-
>>>> >>> From: A discussion listserv for topics surrounding the Open URL NISO
>>>> >>> standard Z39.88. [mailto:open...@oclc.org] On Behalf Of Ray Denenberg,
>>>> >>> Library of Congress
>>>> >>> Sent: 15 May 2009 15:08
>>>> >>> To: open...@oclc.org
>>>> >>> Subject: Fw: [CODE4LIB] openurl.info <http://openurl.info>  ?
>>>> >>>
>>>> >>> What's happened to openurl.info <http://openurl.info> ?
>>>> >>>
>>>> >>> (This note below was posted to code4lib but that's probably not the
>>>> best
>>>> >>>
>>>> >>> forum for this question.)
>>>> >>>
>>>> >>> --Ray Denenberg
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>>
>>>> >>>
>> From: "Walker, David" 
>> Reply-To: "Code for Libraries "
>> Date: Thu, 14 May 2009 12:05:03 -0700
>> To: "Code for Libraries "
>> Subject: [CODE4LIB] openurl.info ?
>>
>> It appears that the openurl.info domain name has expired.  I get an error
>> from the host:
>>
>>    http://www.openurl.info/registry/docs/mtx/info:ofi/fmt:kev:mtx:ctx
>>
>> I've been using the registry at OCLC as a reference source for OpenURL.
>> But all of the identifiers and links pointing to openurl.info no longer
>> work.
>>
>>    http://alcme.oclc.org/openurl/servlet/OAIHandler?verb=ListSets
>>
>> Is there a different place I should be going now for OpenURL info
>> instead?  Or maybe this is just a snafu?
>>
>> --Dave
>>
>> ==
>> David Walker
>> Library Web Services Manager
>> California State University
>> http://xerxes.calstate.edu
>
> -- End of Forwarded Message
>

--


[CODE4LIB] openurl.info ?

2009-05-14 Thread Walker, David
It appears that the openurl.info domain name has expired.  I get an error from 
the host:

http://www.openurl.info/registry/docs/mtx/info:ofi/fmt:kev:mtx:ctx

I've been using the registry at OCLC as a reference source for OpenURL.  But 
all of the identifiers and links pointing to openurl.info no longer work.

   http://alcme.oclc.org/openurl/servlet/OAIHandler?verb=ListSets

Is there a different place I should be going now for OpenURL info instead?  Or 
maybe this is just a snafu?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] exact title searches with z39.50

2009-04-28 Thread Walker, David
I'm not sure it's a _big_ mess, though, at least for metasearching.

I was just looking at our metasearch logs this morning, so did a quick count: 
93% of the searches were keyword searches.  Not a lot of exactness required 
there.  It's mostly in the 7% who are doing more specific searches (author, 
title, subject) where the bulk of the problems lie, I suspect.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ray Denenberg, 
Library of Congress [r...@loc.gov]
Sent: Tuesday, April 28, 2009 8:32 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] exact title searches with z39.50

Right, Mike. There is a long and rich history of the debate between loose
and strict interpretation, in the world at large, and in particular, within
Z39.50, this debate raged from the late 1980s throughout the 90s.  The
faction that said "If you can't give the client what is asks for, at least
give them something; make them happy" was almost religious in its zeal.
Those who said "If you can't give the client what it asks for, be honest
about it; give them good diagnostic information, tell them a better way to
formulate the request, etc. But don't pretend the transaction was a success
if it wasn't" was shouted down most every time.   I can't predict, but I'm
just hoping that lessons have been learned from the mess that that mentality
got us into.

--Ray

- Original Message -
From: "Mike Taylor" 
To: 
Sent: Tuesday, April 28, 2009 10:43 AM
Subject: Re: [CODE4LIB] exact title searches with z39.50


> Ray Denenberg, Library of Congress writes:
> > > The irony is that Z39.50 actually makes _much_ more effort to
> > > specify semantics than most other standards -- and yet still
> > > finds itself in the situation where many implementations do not
> > > respond correctly to the BIB-1 attribute 6=3
> > > (completeness=complete field) which is how Eric should be able to
> > > do what he wants here.
> > >
> > > Not that I have any good answers to this problem ... but I DO
> > > know that inventing more and more replacement standards is NOT
> > > the answer.  Everything that's come along since Z39.50 has
> > > suffered from exactly the same problem but more so.
> >
> > I think this remains to be seen for SRU/CQL, in particular for the
> > example at hand, how to search for exact title.  There are two
> > related issues: one, how arcane the standard is, and two, how
> > closely implementations conform to the intended semantics. And
> > clearly the first has a bearing on the second.
> >
> > And even I would say that Z39.50 is a bit on the arcane side when
> > it comes to formulating a query for exact title. With SRU/CQL there
> > is an "exact" relation ('exact' in 1.1, '==' in 1.2).  So I would
> > think there is less excuse for a server to apply a creative
> > interpretation. If it cannot support "exact title" it should fail
> > the search.
>
> IMHO, this is where it breaks down 90% of the time.  Servers that
> can't do what they're asked should say "I can't do that", but -- for
> reasons that seem good at the time -- nearly no server fails requests
> that it can "sort of" fulfil.  Nine out of ten Z39.50 servers asked to
> do a whole-field search and which can't do it will instead do a word
> search, because "it's better to give the user SOMETHING".  I bet the
> same is true of SRU servers.  (I am as guilty as anyone else, I've
> written servers like that.)
>
> The idea that "it's better to give the user SOMETHING" might -- might
> -- have been true when we mostly used Z39.50 servers for interactive
> sessions.  Now that they are mostly used as targets in metasearching,
> that approach is disastrous.
>
> _/|_ ___
> /o ) \/  Mike Taylor
> http://www.miketaylor.org.uk
> )_v__/\  "I try to take one day at a time, but sometimes several days
> attack me at once" -- Ashleigh Brilliant.


Re: [CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials Solutions Summon

2009-04-24 Thread Walker, David
No, it's like Summon where they are also going out to publishers and other orgs 
to harvest and index their stuff, kinda following what Google Scholar did.  
They'll be harvesting and indexing WorldCat, for example.

This is kinda the in thing, you know.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Friday, April 24, 2009 12:58 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials 
Solutions Summon

That page is timing out for me.

Does EBSCO's service include only what is held by EBSCO in the first place?

(And I'm still pushing for "aggregated index" as what to call these
things! Thanks to whoever suggested that.)

Walker, David wrote:
> I see today that Ebsco announced their "Discovery Service," similar to 
> Summon.  Not surprising, of course -- although note the fact that WorldCat 
> will be included in the "local index", interesting, no?
>
>http://www.ebscohost.com/thisTopic.php?marketID=1&topicID=1245
>
> Anyway, nothing in the press release mentions an API.  Hopefully folks will 
> impress upon Ebsco the need for such access, as OCLC and Serial Solutions 
> have done for their systems. Ebsco does have an API for their basic platform, 
> so maybe it's just not mentioned.
>
> We've taken to calling this class of systems "super databases," for what 
> it's worth.
>
> --Dave
>
> ==
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu
> 
> From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Yitzchak 
> Schaffer [yitzc...@touro.edu]
> Sent: Wednesday, April 22, 2009 10:21 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Serials Solutions Summon
>
> Jonathan Rochkind wrote:
>
>> It _would_ be great if SerSol would actually give you (if you were
>> subscribed) a feed of their harvested and normalized metadata, so you
>> could still pay them to collect and normalize it, but then use it for
>> your own purposes outside of Summon. I hope SerSol will consider this
>> down the line, if Summon is successful.
>>
>
> This is available as a dump for their traditional holdings product,
> which makes it possible to do just this (i.e. use SerSol cleaned
> holdings/access info in a local system).  My working with SerSol has
> brought me to see them as essentially a great data aggregation service
> with some OK software bundled in.  We are looking ahead to possibly
> using this technique by loading their data into a local ERMS.
>
> Agreed that such an availability of data would be a great service with
> the Summon metadata as well.
>
> --
> Yitzchak Schaffer
> Systems Manager
> Touro College Libraries
> 33 West 23rd Street
> New York, NY 10010
> Tel (212) 463-0400 x5230
> Fax (212) 627-3197
> Email yitzc...@touro.edu
> Twitter /torahsyslib
>
>


[CODE4LIB] Ebsco Discovery Service WAS: [CODE4LIB] Serials Solutions Summon

2009-04-24 Thread Walker, David
I see today that Ebsco announced their "Discovery Service," similar to Summon.  
Not surprising, of course -- although note the fact that WorldCat will be 
included in the "local index", interesting, no?

   http://www.ebscohost.com/thisTopic.php?marketID=1&topicID=1245

Anyway, nothing in the press release mentions an API.  Hopefully folks will 
impress upon Ebsco the need for such access, as OCLC and Serial Solutions have 
done for their systems. Ebsco does have an API for their basic platform, so 
maybe it's just not mentioned.

We've taken to calling this class of systems "super databases," for what it's 
worth.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Yitzchak 
Schaffer [yitzc...@touro.edu]
Sent: Wednesday, April 22, 2009 10:21 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

Jonathan Rochkind wrote:
> It _would_ be great if SerSol would actually give you (if you were
> subscribed) a feed of their harvested and normalized metadata, so you
> could still pay them to collect and normalize it, but then use it for
> your own purposes outside of Summon. I hope SerSol will consider this
> down the line, if Summon is successful.

This is available as a dump for their traditional holdings product,
which makes it possible to do just this (i.e. use SerSol cleaned
holdings/access info in a local system).  My working with SerSol has
brought me to see them as essentially a great data aggregation service
with some OK software bundled in.  We are looking ahead to possibly
using this technique by loading their data into a local ERMS.

Agreed that such an availability of data would be a great service with
the Summon metadata as well.

--
Yitzchak Schaffer
Systems Manager
Touro College Libraries
33 West 23rd Street
New York, NY 10010
Tel (212) 463-0400 x5230
Fax (212) 627-3197
Email yitzc...@touro.edu
Twitter /torahsyslib


Re: [CODE4LIB] Serials Solutions Summon

2009-04-21 Thread Walker, David
I've noticed that reference and instructional librarians (at least in published 
literature) tend to use the term "federated search" more often than others.  
And by that they mean a broadcast search, not what Ray and many others mean by 
that term.  

Library technology folk tend to use the other terms more often.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ray Denenberg, 
Library of Congress [r...@loc.gov]
Sent: Tuesday, April 21, 2009 8:28 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

From: "Thomas Dowling" 
> You can define differences between meta-, federated, and broadcast search,
> but
> every discussion on the topic will be punctuated by people asking, "Wait,
> what's the difference again?"

Leaving aside metasearch and broadcast search (terms invented more recently)
it is a shame if "federated" has really lost its distinction
from "distributed".  Historically, a federated database is one that
integrates multiple (autonomous) databases so it is in effect a virtual
distributed database, though a single database.  I don't think that's a
hard concept and I don't think it is a trivial distinction.

--Ray


Re: [CODE4LIB] Serials Solutions Summon

2009-04-21 Thread Walker, David
Even though Summon is marketed as a Serial Solutions system, I tend to think of 
it more as coming from Proquest (the parent company, of course).

Summon goes a bit beyond what Proquest and CSA have done in the past, loading 
outside publisher data, your local catalog records, and some other nice data 
(no small thing, mind you).  But, like Rob and Mike, I tend to see this as an 
evolutionary step for a database aggregator like Proquest rather than a 
revolutionary one.

Obviously, database aggregators like Proquest, OCLC, and Ebsco are well 
positioned to do this kind of work.  The problem, though, is that they are also 
competitors.  At some point, if you want to have a truly unified local index of 
_all_ of your database, you're going to have to cross aggregator lines.  What 
happens then?

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Dr R. 
Sanderson [azar...@liverpool.ac.uk]
Sent: Tuesday, April 21, 2009 8:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Serials Solutions Summon

On Tue, 21 Apr 2009, Eric Lease Morgan wrote:
> On Apr 21, 2009, at 10:55 AM, Dr R. Sanderson wrote:
>> How is this 'new type' of index any different from an index of OAI-PMH
>> harvested material?  Which in turn is no different from any other
>> local search, just a different method of ingesting the data?

> This "new type" of index is not any different in functionality from a
> well-implemented OAI service provider with the exception of the type
> of content it contains.

Not even the type of content, just the source of the content.
Eg SS have come to an agreement with the publishers to use their
content, and they've stuffed it all in one big index with a nice
interface.

NTSH, Move Along...

Rob


Re: [CODE4LIB] Something completely different

2009-04-06 Thread Walker, David
> I know that a large percentage of the data in our 
> MARC records is not being used for finding/gathering 
> or even display, so in that case, what good is it?

This is, of course, a chicken and egg thing.  The reason why a lot of MARC data 
remains inconsistent is precisely because it is not being used for finding or 
display.

Anyone who has worked with a faceted search application has seen this in 
action.  As soon as you aggregate subject headings, genre designations, etc., 
into facets you begin to see all kinds of data problems that you never noticed 
before because they are scattered among thousands of records that previously 
could only be viewed individually.

Of course, fixing bad or inconsistent data is probably an order of magnitude 
easier than adding data to records after the fact.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Alex Dolski 
[alex.dol...@unlv.edu]
Sent: Monday, April 06, 2009 10:38 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Something completely different

I think Dublin Core XML is an excellent attempt at what you're talking
about if you want to consider it a bibliographic data format, which I
guess could be one of its many uses.
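
For what it's worth, a complete simple DC record really is about this small -- 
a made-up example, using the OAI-PMH oai_dc wrapper:

  <oai_dc:dc
      xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
      xmlns:dc="http://purl.org/dc/elements/1.1/">
    <dc:title>An Example Title</dc:title>
    <dc:creator>Example, Author</dc:creator>
    <dc:date>2009</dc:date>
    <dc:subject>Metadata formats</dc:subject>
    <dc:identifier>http://example.org/record/1</dc:identifier>
  </oai_dc:dc>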

I know that a large percentage of the data in our MARC records is not
being used for finding/gathering or even display, so in that case, what
good is it? There is a lot of richness in those records, but it's so
all-over-the-place that whatever value it might have had gets killed by
all the inconsistency. In my experience, good, consistent metadata that
captures the essence of an object is more useful than highly-detailed,
inconsistent metadata (which all highly-detailed metadata tends to be)
in a fine-grained element set.

I think there may be a cultural element to this as well, in that IR
people think of metadata in terms of its utility for IR purposes (at
which DC tends to be extremely practical) and catalogers think of it as
a thorough-as-possible description of an object (at which DC is quite
inadequate).

Alex


Cloutman, David wrote:
> I'm open to seeing new approaches to the ILS in general. A related
> question I had the other day, speaking of MARC, is what would an
> alternative bibliographic data format look like if it was designed with
> the intent for opening access to the data our ILS systems to developers
> in a more informal manner? I was thinking of an XML format that a
> developer could work with without formal training, the basics of which
> could be learned in an hour, and could reasonably represent the
> essential fields of the 90% of records that are most likely to be viewed
> by a public library patron. In my mind, such a format would allow
> creators of community-based web sites to pull data from their local
> library, and repurpose it without having to learn a lot of arcane
> formats (e.g. MARC) or esoteric protocols (e.g. Z39.50). The sacrifice,
> of course, would be losing some of the richness MARC allows, but I
> think in many common situations the really complex records are not what
> patrons are interested in. You may want to consider prototyping this in
> your application. I see such an effort to be vital in making our systems
> relevant in future computing environments, and I am skeptical that a
> simple, workable solution would come out the initial efforts of a
> standardization committee.
>
> Just my 2 cents.
>
> - David
>
> ---
> David Cloutman 
> Electronic Services Librarian
> Marin County Free Library
>
> -Original Message-
> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
> Peter Schlumpf
> Sent: Sunday, April 05, 2009 8:40 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] Something completely different
>
>
> Greetings!
>
> I have been lurking on (or ignoring) this forum for years.  And
> libraries too.  Some of you may know me.  I am the Avanti guy.  I am,
> perhaps, the first person to try to produce an open source ILS back in
> 1999, though there is a David Duncan out there who tried before I did. I
> was there when all this stuff was coming together.
>
> Since then I have seen a lot of good things happen.  There's Koha.
> There's Evergreen.  They are good things.  I have also seen first hand
> how libraries get screwed over and over by commercial vendors with their
> crappy software.  I believe free software is the answer to that.  I have
> neglected Avanti for years, but now I am ready to return to it.
>
> I want to get back to simple things.  Imagine if there were no Marc
> records.  Minimal layers of abstraction.  No politics.  No vendors.  No
> SQL straightjacket.  What would an ILS look like without those things?
> Sometimes the biggest prison is between the ears.
>
> I am in a position to do this now, and that's what I have decided to do.
> I am getting busy.
>
> Peter S

Re: [CODE4LIB] Free cover images?

2009-03-16 Thread Walker, David
> However, my understanding is that Worldcat forbids any 
> use of those  cover images _at all_. 

OCLC does return the cover image URL as part of its Z39.50 response, so I'm 
guessing that it is intended to be used by external applications, or at least 
those that are actually searching Worldcat.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: Monday, March 16, 2009 1:30 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Free cover images?

It would be hard for them to turn it off just for services that do not
have "the principal purpose" of "driving traffic to the Amazon website
and driving sales of products and services on the Amazon website."

Libraries are not alone in users of AWS who do not have this "principal
purpose". Despite that language, it's not clear to me that Amazon
actually has any particular interest in preventing such use.

But they wouldn't need to switch me off technologically, if I received
any communications from Amazon suggesting my use violates their ToS, I'd
immediately comply with their requests.  My further thoughts on this can
be found here:
http://bibwild.wordpress.com/2008/03/19/think-you-can-use-amazon-api-for-library-service-book-covers/


However, my understanding is that Worldcat forbids any use of those
cover images _at all_.  This is much more clear cut, and OCLC is much
more likely to care, than Amazon's more bizarre restrictions as to
purpose.  It's of course up to the individual implemeter, perhaps in
consultation with the service provider and/or legal counsel,  to decide
if they are complying or not, but that's my own evaluation.  I don't
even know of any WorldCat APIs that would allow you to get WorldCat
cover images other than through a screen-scrape though, so I'm curious
how anyone is doing it, if anyone is doing it.

Jonathan

Kyle Banerjee wrote:
> Yah, but same could be said for Amazon. From http://aws.amazon.com/agreement/
>
> 5.1.3. You are not permitted to use Amazon Associates Web Service with
> any Application or for any use that does not have, as its principal
> purpose, driving traffic to the Amazon Website and driving sales of
> products and services on the Amazon Website.
>
> Maybe libraries are under the radar, and maybe Amazon doesn't care,
> but getting addicted to this stuff is not without risk. If the load
> ever became something they cared about, they could turn it off in a
> snap.
>
> kyle
>
> On Mon, Mar 16, 2009 at 12:53 PM, Jonathan Rochkind  wrote:
>
>> You can get cover images from worldcat? How?  I'm pretty sure the worldcat
>> ToS specifically disallow you from re-using those covers, even if you are
>> managing to get them via machine access somehow.
>>
>> Lynch,Katherine wrote:
>>
>>> Going along with Jonathan Rochkind, Amazon does a good job of supplying
>>> some movie images.  Also in general, WorldCat, if that's an option to
>>> you.  For a good example of wealth/response time, check out Gabe's video
>>> search:
>>> http://www.library.drexel.edu/video/search
>>>
>>> ---
>>> Katherine Lynch
>>> Library Webmaster
>>> Drexel University Libraries
>>> 215.895.1344 (p)
>>> 215.895.2070 (f)
>>>
>>>
>>> -Original Message-
>>> From: Code for Libraries [mailto:code4...@listserv.nd.edu] On Behalf Of
>>> Edward M. Corrado
>>> Sent: Monday, March 16, 2009 2:38 PM
>>> To: CODE4LIB@LISTSERV.ND.EDU
>>> Subject: [CODE4LIB] Free cover images?
>>>
>>> Hello all,
>>>
>>> We are reevaluating our source of cover images. At this point I have
>>> identified four possible sources of free images:
>>>
>>> 1. Amazon
>>> 2. Google Books
>>> 3. LibraryThing
>>> 4. OpenLibrary
>>>
>>> I know that there is some question if the Amazon and Google books images
>>>
>>> will allow this (although I've also yet to hear Amazon or Google telling
>>>
>>> libraries that use their Web services for this to cease and desist).
>>> However, besides that issue, has anyone noticed any technical problems with
>>> any of these four? I'm especially concerned about slow and/or non-consistent
>>> performance.
>>>
>>> Edward
>>>
>>>
>>>
>
>
>
>


Re: [CODE4LIB] MARC-XML -> Qualified Dublin Core XSLT

2009-03-06 Thread Walker, David
Thanks Tom and Dana!

Dana, I bow to your superior Google searching skills. :-)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Thomas G. 
Habing [thab...@illinois.edu]
Sent: Friday, March 06, 2009 12:41 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MARC-XML -> Qualified Dublin Core XSLT

Look here:
http://imlsdcc.grainger.uiuc.edu/docs/stylesheets/GeneralMARCtoQDC.xsl

Kind regards,
Tom

--
Thomas G. Habing
Research Programmer
Grainger Engineering Library Information Center
University of Illinois at Urbana-Champaign

Walker, David wrote:
> Hi All,
>
> Anyone have an XSLT style sheet to convert from MARC-XML to Qualified Dublin 
> Core?
>
> I'm looking to load these into DSpace, if that makes a difference.  Looks 
> like LOC only has MARC-XML to Simple Dublin Core.  This page [1] mentions a  
> 'MARCXML to Qualified DC style sheets' developed at the University of 
> Illinois, but the links are dead.
>
> --Dave
>
> [1] http://cicharvest.grainger.uiuc.edu/schemas.asp
>
> ==
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu


[CODE4LIB] MARC-XML -> Qualified Dublin Core XSLT

2009-03-06 Thread Walker, David
Hi All,

Anyone have an XSLT style sheet to convert from MARC-XML to Qualified Dublin 
Core?  

I'm looking to load these into DSpace, if that makes a difference.  Looks like 
LOC only has MARC-XML to Simple Dublin Core.  This page [1] mentions a  
'MARCXML to Qualified DC style sheets' developed at the University of Illinois, 
but the links are dead.
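
To give a sense of what I'm after, the guts of such a sheet would presumably 
look something like this -- just a hand-waved fragment on my part, with the 
field and term choices purely illustrative:

  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:marc="http://www.loc.gov/MARC21/slim"
      xmlns:dc="http://purl.org/dc/elements/1.1/"
      xmlns:dcterms="http://purl.org/dc/terms/">

    <xsl:template match="marc:record">
      <record>
        <dc:title>
          <xsl:value-of select="marc:datafield[@tag='245']/marc:subfield[@code='a']"/>
        </dc:title>
        <dcterms:abstract>
          <xsl:value-of select="marc:datafield[@tag='520']/marc:subfield[@code='a']"/>
        </dcterms:abstract>
      </record>
    </xsl:template>

  </xsl:stylesheet>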

--Dave

[1] http://cicharvest.grainger.uiuc.edu/schemas.asp

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] Dutch Code4Lib

2009-01-22 Thread Walker, David
Some of us can barely afford to get to the east coast of the United States, let 
alone Europe.  Not that you have to cater to us poor state university folk, or 
anything. ;-)

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Edward M. 
Corrado [ecorr...@ecorrado.us]
Sent: Thursday, January 22, 2009 10:07 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Dutch Code4Lib

I know there was a talk about a code4lib Europe in Portugal before. I'd love
to see a European conference, but I am a little torn between making it a
separate conference from Code4lib and making it a location to host Code4lib.
What do people think?

Edward

On Thu, Jan 22, 2009 at 1:02 PM, Ed Summers  wrote:

> Wow, this sounds too good to be true. Perhaps this is premature, but
> do you think there might be interest in hosting a code4lib2010 in the
> Netherlands? (he asks selfishly).
>
> I see you started a wiki page [1]. If at any point you want
> nl.code4lib.org (or something) to point somewhere just say the word.
>
> //Ed
>
> [1] http://wiki.code4lib.org/index.php/NL
>
> On Wed, Jan 21, 2009 at 12:40 PM, Posthumus, Etienne
>  wrote:
> > At various tech related library gatherings here in the Netherlands there
> > have been discussion about setting up a regional Code4Lib (or something
> > similar)
> > So as a start, if there any subscribers on this list who are in the
> > vicinity of Netherlands/Belgium and interested, please give me a shout.
> >
> > We can then argue about a date/venue/agenda for an inaugural meeting.
> >
> > Etienne Posthumus
> > TU Delft Library   -  Digital Product Development
> > t: +31 (0) 15 27 81 949
> > m: e.posthu...@tudelft.nl
> > skype:  eposthumus
> > twitter: http://twitter.com/epoz
> > http://www.library.tudelft.nl/
> > Prometheusplein 1, 2628 ZC, Delft, Netherlands
> >
> >
>


Re: [CODE4LIB] MODS-to-citation stylesheets

2009-01-12 Thread Walker, David
> Do you know if it is specifically geared 
> toward Zotero's SQLite data structures?

I don't believe so, since the CSL standard, such as it is, predates Zotero.

I've worked with CSL a little bit in trying to create a PHP CSL rules engine.  
Not a trivial task, to say the least, but long-term it would serve you very 
well.  I think there are Python and Ruby CSL libraries, but my impression, like 
Jonathan's, is that they are still somewhat in the works.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Andrew Ashton 
[andrew_ash...@brown.edu]
Sent: Monday, January 12, 2009 8:06 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] MODS-to-citation stylesheets

Thanks, I have heard of CSL but never really worked with it.  Do you know if
it is specifically geared toward Zotero's SQLite data structures?  We're
interested in generating citations in web apps from a native XML database of
MODS, preferably without going through Zotero.

I've experimented with generating the citations from our eXist-based system
by way of Zotero, but we run into genre-authority issues.  Zotero's default
MODS import translator expects  and our
system uses other authorities.  Granted, I'm working off of some older
Zotero code, so I may not have the most recent info.

-Andy


On 1/12/09 10:51 AM, "Jonathan Rochkind"  wrote:

> What I've been meaning to investigate more fully is the "Citation Style
> Language" (CSL) which is used by Zotero for citation outputting--there
> are some other non-Zotero engines for CSL, but I'm not sure how
> mature/ready for production any of them are. The Zotero engine is of
> course in Javascript, so inconvenient (although not impossible) to
> re-use that code a server side app.
>
> I haven't really investigated what's going on with CSL, but that seems
> to be the 'right' way to deal with this to me. Once you have a CSL
> engine incorporated in your app, you can output not just in Chicago or
> MLA, but any citation style now or in the future that Zotero (or anyone
> else) provides a CSL file for. Thanks to Zotero (and its partners?) for
> developing this re-useable CSL format instead of just a custom Zotero
> solution.
>
> Jonathan
>
> Andrew Ashton wrote:
>> Can someone point me at any good, freely-available stylesheets to convert
>> MODS to Chicago or MLA formatted citations?  It seems like something that
> should be readily available, but I can't seem to find it.   I'd rather not
>> reinvent the wheel if possible...
>>
>> Thanks, Andy
>>
>>


Re: [CODE4LIB] Good advanced search screens

2008-11-17 Thread Walker, David
> How about dispensing altogether with the
> basic/advanced dichotomy in a search interface?

I'm not sure I can dispense with it completely, Peter.

As Peter Morville said on the site Susan posted: "[I]t may be worth offering 
advanced features that are useful to a small yet important subset of users."  
I'll give you three guesses as to who my "small yet important subset of users" 
are, and the first two don't count. ;-)

--Dave
==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Peter Schlumpf [EMAIL 
PROTECTED]
Sent: Saturday, November 15, 2008 5:45 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Good advanced search screens

How about dispensing altogether with the basic/advanced dichotomy in a search 
interface?  Just create a well designed interface that's consistent and works 
well for all users.  The basic/advanced dichotomy is really quite arbitrary, 
and exists in the mind of the designer.

One thing that seems to be underappreciated these days is a straightforward and 
flexible search syntax.  A command line in the search field may be a much more 
elegant and consistent solution than trying to make all options available and 
visible in a GUI.

Make the basic features of the search interface clear and easy to use, but 
design the interface in such a way that more advanced users can easily 
"discover" the features they need as they use it.  With this approach Basic and 
Advanced exist on a continuum.  There's a little learning curve but all users 
will have the motivation to learn to use the interface to the level that 
satisfies their needs, and in the long run probably find it much easier to use.

Peter

Peter Schlumpf
[EMAIL PROTECTED]
http://www.avantilibrarysystems.com



-Original Message-
>From: "Walker, David" <[EMAIL PROTECTED]>
>Sent: Nov 14, 2008 4:48 PM
>To: CODE4LIB@LISTSERV.ND.EDU
>Subject: [CODE4LIB] Good advanced search screens
>
>I'm working on an advanced search screen as part of our WorldCat API project.
>
>WorldCat has dozens of indexes and a ton of limiters.  So many, in fact, that 
>it's rather daunting trying to design it all in a way that isn't just a big 
>dump of fields and check boxes that only a cataloger could decipher.
>
>So I'm looking for examples of good advanced search screens (for bibliographic 
>databases or otherwise) to gain some inspiration.  Thanks!
>
>--Dave
>
>==
>David Walker
>Library Web Services Manager
>California State University
>http://xerxes.calstate.edu


[CODE4LIB] Good advanced search screens

2008-11-14 Thread Walker, David
I'm working on an advanced search screen as part of our WorldCat API project.

WorldCat has dozens of indexes and a ton of limiters.  So many, in fact, that 
it's rather daunting trying to design it all in a way that isn't just a big 
dump of fields and check boxes that only a cataloger could decipher.

So I'm looking for examples of good advanced search screens (for bibliographic 
databases or otherwise) to gain some inspiration.  Thanks!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] Google books js api, oclc/lccn, any problems?

2008-10-15 Thread Walker, David
>So, I would assume that the 2416076 record
> was merged into the 24991049 record

Or maybe this is an example of WorldCat's FRBR work set grouping at work?  I've 
been struggling to wrap my mind around this recently.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Custer, Mark [EMAIL 
PROTECTED]
Sent: Wednesday, October 15, 2008 12:45 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Google books js api, oclc/lccn, any problems?

I haven't noticed any problems like that myself lately, but I have had
some trouble/confusion with OCLC numbers and GBS in the past.

If you do a search for OCLC number 2416076 in worldcat.org, you get
directed to the book:

http://www.worldcat.org/search?q=no%3A2416076&qt=advanced  [which, of
course, features no link to Google Books right now]

but the OCLC number listed is 24991049, and no longer 2416076.  So, I
would assume that the 2416076 record was merged into the 24991049 record
(which was just recently updated on 2008-08-28), and so I would also
assume that the records retained by Google would not reflect this
update?  In any event, I would suspect that it might not be a problem
with the GBS api, but rather with the change in metadata that is tracked
by OCLC but not by Google (since they wouldn't have known).

Do you have any other suspect examples, though, that might provide other
evidence?


Mark Custer


-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Jonathan Rochkind
Sent: Wednesday, October 15, 2008 3:04 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Google books js api, oclc/lccn, any problems?

Anyone else noticed any problems with the GBS javascript api?

It seems to have stopped returning hits for me for LCCN or OCLCnum,
where it used to work. Seems to work now only for ISBN.

Here's a URL call that used to return hits, and now doesn't:

http://books.google.com/books?jscmd=viewapi&callback=gbscallback&bibkeys
=OCLC%3A2416076%2CLCCN%3A34025476

Jonathan

---
Jonathan Rochkind
Digital Services Software Engineer
The Sheridan Libraries
Johns Hopkins University
410.516.8886
[EMAIL PROTECTED]


Re: [CODE4LIB] OAI-PMH Harvester in PHP?

2008-10-06 Thread Walker, David
Thanks everyone!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Mark Jordan [EMAIL 
PROTECTED]
Sent: Monday, October 06, 2008 2:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] OAI-PMH Harvester in PHP?

David, if you need a harvester with a web GUI for administration and searching, 
check out the PKP Metadata Harvester at http://pkp.sfu.ca/harvester, which runs 
on PHP and mysql. We've got a development version in cvs that uses Lucene for 
indexing; if you want more info, let me know.

Small world, I was looking at some of your WorldCat presentations just now

Mark

Mark Jordan
Head of Library Systems
W.A.C. Bennett Library, Simon Fraser University
Burnaby, British Columbia, V5A 1S6, Canada
Voice: 778.782.5753 / Fax: 778.782.3023
[EMAIL PROTECTED]

- "David Walker" <[EMAIL PROTECTED]> wrote:

> Hi all,
>
> Anyone know of any OAI-PMH harvesting software written in PHP?  I've
> seen the code that can serve as a provider, but I'm looking for a
> harvester.
>
> Thanks!
>
> --Dave
>
> ==
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu


[CODE4LIB] OAI-PMH Harvester in PHP?

2008-10-06 Thread Walker, David
Hi all,

Anyone know of any OAI-PMH harvesting software written in PHP?  I've seen the 
code that can serve as a provider, but I'm looking for a harvester.
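
For context, the bare-bones harvest loop itself looks roughly like this -- a 
sketch, with the endpoint URL just a placeholder:

  <?php
  // pull oai_dc records from a repository, following resumption tokens
  $base = 'http://example.org/oai';
  $url  = $base . '?verb=ListRecords&metadataPrefix=oai_dc';

  do {
      $xml = simplexml_load_file($url);
      $xml->registerXPathNamespace('oai', 'http://www.openarchives.org/OAI/2.0/');

      foreach ($xml->xpath('//oai:record') as $record) {
          // do something useful with each record here
      }

      $nodes = $xml->xpath('//oai:resumptionToken');
      $token = $nodes ? trim((string) $nodes[0]) : '';
      $url   = $base . '?verb=ListRecords&resumptionToken=' . urlencode($token);
  } while ($token != '');
  ?>

It's everything around that loop -- storage, error handling, incremental 
harvests -- that makes an existing package attractive.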

Thanks!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] creating call number browse

2008-09-17 Thread Walker, David
> a decent UI is probably going to be a bigger job

I've always felt that the call number browse was a really useful option, but 
the most disastrously implemented feature in most ILS catalog interfaces.

I think the problem is that we're focusing on the task -- browsing the shelf -- 
as opposed to the *goal*, which is, I think, simply to show users books that 
are related to the one they are looking at.

If you treat it like that (here are books that are related to this book) and 
dispense with the notion of call numbers and shelves in the interface (even if 
what you're doing behind the scenes is in fact a call number browse) then I 
think you can arrive at a much simpler and straight-forward UI for users.  I 
would treat it little different than Amazon's recommendations feature, for 
example.

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Stephens, Owen [EMAIL 
PROTECTED]
Sent: Wednesday, September 17, 2008 9:17 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] creating call number browse

I'm not sure, but my guess would be that the example you give isn't
really a 'browse index' function, but rather creates a search result set
and presents it in a specific way (i.e. via cover images) sorted by call
number (by the look of it, it has an ID of the bib record as input, and
it displays this book and 10 before it, and 10 after it, in call number
order).

Whether this is how bibliocommons achieves it or not is perhaps beside
the point - this is how I think I would approach it. I'm winging it
here, but if I was doing some quick and very dirty here:

A simple db table with fields:

Database ID (numeric counter auto-increment)
Bib record ID
URIs to book covers (or more likely the relevant information to create
the URIs such as ISBN)
Call number

To start, get a report from your ILS with this info in it, sorted by
Call Number. To populate the table, import your data (sorted in Call
Number order). The Database ID will be created on import, automatically
in call number order (there are other, almost certainly better, ways of
handling this, but this is simple I think)

To create your shelf browse given a Bib ID select that record and get
the database ID. Then requery selecting all records which have database
IDs +-10 of the one you have just retrieved.

Output results in appropriate format (e.g. html) using book cover URIs
to display the images.

Obviously with this approach, you'd need to recreate your data table
regularly to keep it up to date (resetting your Database ID if you
want).
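
In SQL terms that would be something like the following -- a rough, 
MySQL-flavoured sketch, with the table and column names made up:

  CREATE TABLE shelf_order (
      id          INT AUTO_INCREMENT PRIMARY KEY,  -- position in call number order
      bib_id      VARCHAR(20),
      isbn        VARCHAR(20),                     -- enough to build cover image URLs
      call_number VARCHAR(100)
  );

  -- find the shelf position of the record being viewed
  SELECT id FROM shelf_order WHERE bib_id = 'b1234567';

  -- then pull the ten titles on either side of that position (:pos = the id
  -- returned by the first query)
  SELECT bib_id, isbn, call_number
  FROM shelf_order
  WHERE id BETWEEN :pos - 10 AND :pos + 10
  ORDER BY id;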

Well - just how I'd do it if I wanted something up and running quickly.
As Andy notes, a decent UI is probably going to be a bigger job ;)

Owen

Owen Stephens
Assistant Director: eStrategy and Information Resources
Central Library
Imperial College London
South Kensington Campus
London
SW7 2AZ

t: +44 (0)20 7594 8829
e: [EMAIL PROTECTED]

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf
Of
> Emily Lynema
> Sent: 17 September 2008 16:46
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] creating call number browse
>
> Hey all,
>
> I would love to tackle the issue of creating a really cool call number
> browse tool that utilizes book covers, etc. However, I'd like to do
> this
> outside of my ILS/OPAC. What I don't know is whether there are any
> indexing / SQL / query techniques that could be used to browse forward
> and backward in an index like this.
>
> Has anyone else worked on developing a tool like this outside of the
> OPAC? I guess I would be perfectly happy even if it was something I
> could build directly on top of the ILS database and its indexes (we
use
> SirsiDynix Unicorn).
>
> I wanted to throw a feeler out there before trying to dream up some
> wild
> scheme on my own.
>
> -emily
>
> P.S. The version of BiblioCommons released at Oakville Public Library
> has a sweet call number browse function accessible from the full
record
> page. I would love to know know how that was accomplished.
>
> http://opl.bibliocommons.com/item/show/1413841_mars
>
> --
> Emily Lynema
> Systems Librarian for Digital Projects
> Information Technology, NCSU Libraries
> 919-513-8031
> [EMAIL PROTECTED]


[CODE4LIB] Innovative DLF ILS-DI code WAS: [CODE4LIB] Update: DLF ILS-DI Developers' Workshop Aug 7

2008-07-17 Thread Walker, David
Hi all,

I'm working on converting a screen-scraping class I have, written in PHP, for 
looking up bib and availability information in Innovative systems, to the new 
ILS-DI specification, and had a couple of questions:

1. Is there a place (other than the workshop) to discuss issues or questions I 
might have?  A listserv perhaps?

2. Is anyone else thinking about, or currently working on, an implementation 
for Innovative?

Since the company has not agreed to work with the library community on this, 
we're kind of on our own.  I've got a pretty good scraper that can accommodate 
most of the abstract functions in the spec.  But wanted to see if others did 
too, so we might combine efforts.
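
For what it's worth, the scraping side of it is conceptually simple.  A 
stripped-down sketch of the sort of thing that sits behind a 
GetAvailability-style function -- the URL pattern, markup, and regular 
expression here are all invented, since every catalog skin is different:

  <?php
  // fetch an OPAC record display page and pull item status strings out of the HTML
  function get_availability($opac_base, $bib_id)
  {
      // hypothetical URL pattern and class name; adjust for the catalog in question
      $html = file_get_contents($opac_base . '/record=' . urlencode($bib_id));

      $items = array();

      if (preg_match_all('/class="itemStatus">([^<]+)</', $html, $matches)) {
          foreach ($matches[1] as $status) {
              $items[] = trim($status);
          }
      }

      return $items;
  }
  ?>

The real work, of course, is mapping what comes back into the response formats 
the spec defines.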

Thanks!

--Dave

==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Emily Lynema [EMAIL 
PROTECTED]
Sent: Thursday, July 10, 2008 9:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Update: DLF ILS-DI Developers' Workshop Aug 7

Now that the DLF technical recommendation is officially published [1], DLF
is trying to help maintain momentum and build a community of implementation
around this project. Toward that end, an ILS-DI Developers' Workshop has
been organized in August for folks to hash out questions and answers about
implementing the first level of the recommendation, Basic Discovery
Interfaces. While this meeting is invitation only to keep the size down,
feel free to let me know if you are involved in this type of implementation
and think you could contribute to this meeting.

Of course, a summary of the outcome of the meeting will be made available in
its aftermath. It is even possible there may be some suggested revisions or
clarifications to the recommendation as we actually begin to write code.

I've included the text of the original invitation below for all to see. We
hope to keep this topic of APIs and interoperability for our integrated
library systems fresh on your mind, especially as so many of you are
building these types of APIs literally as we speak.

-emily lynema

[1] http://diglib.org/architectures/ilsdi/
-

Greetings -

As you may know, the Digital Library Federation has released
the technical recommendation of its ILS Discovery Interface
(ILS-DI) Task Group.  This document recommends basic, standard
interfaces -- known as the Berkeley Accord -- for integrating
the data and services of integrated library systems (ILS) with
new applications supporting user discovery.  The documentation
is available at : http://diglib.org/architectures/ilsdi/ .

The basic discovery interfaces permit libraries to deploy new
discovery services to meet ever-growing user expectations in
the Web 2.0 era, take full advantage of advanced ILS data
management and services, and encourage a strong, innovative
community and marketplace in next-generation library management
and discovery applications.

DLF is planning a developer's workshop for Thursday, August 7,
at the Berkeley Faculty Club on the UC Berkeley campus, in
which parties supporting the Basic Discovery Interfaces can
learn more about the interfaces and how they should be
implemented, meet with potential development partners, and
begin the formation of a community building effective software
services.  Because of the nature of this meeting, we recommend
that staff with a high degree of technical knowledge of your
platform and bibliographic standards and protocols receive
priority for attendance.

The Berkeley Accord and the DLF ILS-DI recommendation are
important first steps in building advanced, interoperable
architectures for bibliographic discovery and use in the
networked world.


[CODE4LIB] Webfeet, Encompass WAS: Ser Sol 360 Search

2008-07-11 Thread Walker, David
Thanks to everyone who responded to my earlier request.

On to the next system: If your library licenses the Webfeat metasearch system, 
would you mind contacting me off-list?  I have similar questions to ask you all.

Also -- and I realize I'm reaching here -- if you happen to have the 
now-defunct Endeavor Encompass system still up and running somewhere (even if 
it's out of public view) would you mind contacting me.

Thanks!

--Dave


==
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu

From: Walker, David
Sent: Monday, July 07, 2008 8:57 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Ser Sol 360 Search

Hi All,

I'm giving a conference presentation later this month on metasearch.  If your 
library licenses Serial Solutions' metasearch system, would you mind contacting 
me off-list?  I'd like to ask a couple of questions.  Thanks!

--Dave


---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


[CODE4LIB] Ser Sol 360 Search

2008-07-07 Thread Walker, David
Hi All,
 
I'm giving a conference presentation later this month on metasearch.  If your 
library licenses Serial Solutions' metasearch system, would you mind contacting 
me off-list?  I'd like to ask a couple of questions.  Thanks!
 
--Dave
 
 
---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu


Re: [CODE4LIB] III SIP server

2008-06-13 Thread Walker, David
Brilliant.  Thanks Mark!

--Dave


---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu



From: Code for Libraries on behalf of Mark Ellis
Sent: Thu 6/12/2008 10:14 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] III SIP server



All,

I've attached two versions of a SIP client script that retrieves patron
information -- one for telnet-based servers and the other for socket-based
ones. All the functions except PatronInformation() are applicable to other
SIP messages, so while this isn't the full client library you're dreaming
about, it could still save you some head banging. (You may still want to
bang my head after looking at it, though.)
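
For anyone curious what the socket-based flavor of such a script boils down
to, here is a minimal PHP sketch that sends a SIP2 Patron Information
request (message 63) over a raw socket and reads back the 64 response. The
host, port, institution ID, and patron barcode are placeholders, the
optional sequence-number/checksum fields are omitted, and some servers will
also want a 93 Login message first -- this illustrates the message layout,
not the attached script itself.

<?php
// Minimal sketch of a socket-based SIP2 Patron Information request.
// Host, port, institution id, and patron barcode are placeholders; the
// optional error-detection fields (AY sequence, AZ checksum) are omitted,
// and some servers may require a 93 Login message before this one.
$host = 'sip.example.edu';
$port = 6001;

$fp = fsockopen($host, $port, $errno, $errstr, 10);
if (!$fp) {
    die("SIP connection failed: $errstr ($errno)\n");
}

// Fixed-length header: message id 63, language (3), transaction date (18),
// summary flags (10), then variable-length fields delimited by '|'.
$date    = date('Ymd') . '    ' . date('His');
$request = '63'
         . '001'                     // language: English
         . $date
         . str_repeat(' ', 10)       // summary: no item lists requested
         . 'AOMYLIB|'                // AO: institution id (placeholder)
         . 'AA21000012345678|'       // AA: patron barcode (placeholder)
         . 'AC|';                    // AC: terminal password (blank)

fwrite($fp, $request . "\r");        // SIP2 messages end with a carriage return
$response = stream_get_line($fp, 4096, "\r");   // read the 64 response
fclose($fp);

echo $response, "\n";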

The 3M SIP2 SDK (http://www.yourlibrary.ca/mark/SIP2_SDK.ZIP) includes
the protocol definition along with a Windows client and server you can
use for testing.  The client is particularly useful as you can use it
interactively with your ILS.

HTH,

Mark

-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Walker, David
Sent: Wednesday, June 11, 2008 3:00 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] III SIP server

I'd like to see the PHP code, Mark.  Would you mind sending it to me, or
perhaps posting it somewhere where we all might download it?

Thanks!

--Dave

---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu



From: Code for Libraries on behalf of Mark Ellis
Sent: Wed 6/11/2008 8:42 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] III SIP server



Wayne,

What are you using for a client?  I have some PHP for getting patron
information, but there's nothing III specific about it, so I don't know
if it'd be helpful.  Do you have the 3M SIP SDK?

Mark

Mark Ellis
Manager, Information Technology
Richmond Public Library
Richmond, BC
(604) 231-6410
www.yourlibrary.ca


-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Schneider, Wayne
Sent: Tuesday, June 10, 2008 4:29 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] III SIP server

Has anyone out there attempted to code to III's SIP server?  We're new
to III, having just merged with another library system that is a III
customer, and we were hoping to be able to use SIP for some basic
customer account information - nothing too fancy, just some of what is
supported in version 2.00 of the protocol.  Name and address would be
nice (name we seem to get, but no address), items out, items on hold,
fines and fees, etc.  Our other ILS, SirsiDynix Horizon, has pretty
good, if somewhat idiosyncratic, support for SIP 2.00 features, with a
few fairly well-documented extensions, and we were hoping to find the
same level of support in III's server.  Is this an entirely
unreasonable expectation?

wayne
--
Wayne Schneider
ILS System Administrator
Hennepin County Library
952.847.8656
[EMAIL PROTECTED]
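
As a footnote to Wayne's question: the name, address, and fine information
he lists come back in the SIP2 Patron Information Response (message 64) as
two-character coded fields.  A minimal PHP sketch of pulling a few of them
apart might look like the following.  The sample response string is
invented, and the field codes shown (AE personal name, BD home address, BV
fee amount) are the common SIP2 ones -- check your server's documentation
for vendor-specific extensions.

<?php
// Minimal sketch: unpack a few fields from a SIP2 64 (Patron Information
// Response).  The response below is invented sample data.
$header   = '64' . str_repeat(' ', 14)    // patron status flags (blank here)
          . '001'                         // language
          . '20080611    123456'          // transaction date
          . str_repeat('0000', 6);        // hold/overdue/charged/fine/
                                          // recall/unavailable-holds counts
$response = $header
          . 'AOMYLIB|AA21000012345678|AEDoe, Jane|'
          . 'BD123 Main St, Oakland CA|BV5.25|';

// Everything after the 61-character fixed header is a series of
// '|'-delimited fields whose first two characters identify the field.
$fields = array();
foreach (explode('|', substr($response, 61)) as $chunk) {
    if (strlen($chunk) >= 2) {
        $fields[substr($chunk, 0, 2)] = substr($chunk, 2);
    }
}

// Common SIP2 field codes: AE personal name, BD home address, BV fee amount.
echo 'Name:    ' . (isset($fields['AE']) ? $fields['AE'] : '') . "\n";
echo 'Address: ' . (isset($fields['BD']) ? $fields['BD'] : '') . "\n";
echo 'Owed:    ' . (isset($fields['BV']) ? $fields['BV'] : '') . "\n";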


Re: [CODE4LIB] III SIP server

2008-06-11 Thread Walker, David
I'd like to see the PHP code, Mark.  Would you mind sending it to me, or 
perhaps posting it somewhere where we all might download it?

Thanks!

--Dave

---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu



From: Code for Libraries on behalf of Mark Ellis
Sent: Wed 6/11/2008 8:42 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] III SIP server



Wayne,

What are you using for a client?  I have some PHP for getting patron
information, but there's nothing III specific about it, so I don't know
if it'd be helpful.  Do you have the 3M SIP SDK?

Mark

Mark Ellis
Manager, Information Technology
Richmond Public Library
Richmond, BC
(604) 231-6410
www.yourlibrary.ca


-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Schneider, Wayne
Sent: Tuesday, June 10, 2008 4:29 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] III SIP server

Has anyone out there attempted to code to III's SIP server?  We're new
to III, having just merged with another library system that is a III
customer, and we were hoping to be able to use SIP for some basic
customer account information - nothing too fancy, just some of what is
supported in version 2.00 of the protocol.  Name and address would be
nice (name we seem to get, but no address), items out, items on hold,
fines and fees, etc.  Our other ILS, SirsiDynix Horizon, has pretty
good, if somewhat idiosyncratic, support for SIP 2.00 features, with a
few fairly well-documented extensions, and we were hoping to find the
same level of support in III's server.  Is this an entirely
unreasonable expectation?

wayne
--
Wayne Schneider
ILS System Administrator
Hennepin County Library
952.847.8656
[EMAIL PROTECTED]


Re: [CODE4LIB] Life after Expect

2008-05-15 Thread Walker, David
Going back to the original topic here a bit . . .
 
> Is there any hope for those of us who 
> rely on our Expect-monkeys in III?

There are, of course, a number of macro-type programs out there that can 
emulate keystrokes and mouse clicks in order to interface with the Millennium 
Java client.  You could probably use these to achieve the same automated tasks 
your Expect scripts were performing.
 
I don't really do this stuff myself, but one of our ILS admins here uses a free 
application called AutoIT to automate loading of data into Innovative by way of 
the Millennium Java client.
 
--Dave
 
---
David Walker
Library Web Services Manager
California State University
http://xerxes.calstate.edu



From: Code for Libraries on behalf of Ken Irwin
Sent: Wed 5/14/2008 4:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Life after Expect



Is there any hope for those of us who rely on our Expect-monkeys in III?
My most important Expect scripts use the create-list function, and I
hope that'll stay around for a while. But I'm sure they'll eventually go
away too.

Has III shown any interest in building in their own macros/automation
features to do the sorts of tasks for which we rely on Expect?

Ken

Kyle Banerjee wrote:
> Last week, III announced that they are removing a number of
> circulation functions from the telnet menus in a software update that
> became generally available this month. From what I've been able to
> surmise, functions that will be removed include placing holds and
> checking things in or out. Removing these menu options will break
> scripts that have been in use for years at institutions in our
> consortium, and lots more staff time will be required to perform
> certain tasks after some systems are upgraded.
>
> Apparently, III recently discovered that a bug involving holds was
> caused by the character-based system, but it is also related to a
> desire to port everything to Millennium. Based on the reasoning behind
> the announcement, future updates are likely result in other mission
> critical scripts breaking as other character-based functionality is
> deprecated.
>
> Just a reminder of the risks of relying on automation that depends on
> interfaces that are losing vendor support.
>
> kyle
>

--
Ken Irwin
Reference Librarian
Thomas Library, Wittenberg University

