Re: [CODE4LIB] Heroku

2016-06-08 Thread Harper, Cynthia
Thanks! I'd heard of Heroku, but hadn't understood why it might be just what 
I'm looking for!
Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of James 
Fournie
Sent: Wednesday, June 08, 2016 2:08 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Heroku

Heroku is a Platform-as-a-Service (PaaS) product.

Amazon Web Services (AWS) is a suite of many different services; among them is 
AWS Elastic Beanstalk, a service similar to Heroku.

AWS Elastic Compute Cloud (EC2) is usually the service people are thinking of 
when they think of AWS.  EC2 is what is called Infrastructure-as-a-Service 
(IaaS).  An IaaS like EC2 gives you a virtual machine, usually a Linux server. 
You maintain that server: you install dependencies such as Ruby, MySQL, PHP, 
and Apache, apply updates, and are essentially the sysadmin for that server. 
You upload your application, keep it running on there, and maintain everything 
yourself.

With PaaS like Heroku, most of that Linux sysadmin work is abstracted away and 
largely done automatically. Instead, you just create a simple configuration 
file that tells Heroku what kind of application you have (Ruby on Rails, 
Python, PHP, Java, etc.) and perhaps what services you need (MySQL, PostgreSQL, 
Redis, etc.), then upload your code.  The PaaS automatically installs 
dependencies, wires everything up, and just makes things work; you don't need 
to worry about monitoring the server, upgrades, and so on.  PaaS makes it 
simpler to deploy and maintain an app, because you don't need to be as much of 
a sysadmin.
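
To make the workflow concrete, here is a sketch of a minimal Heroku deployment 
(the app name and add-on below are illustrative, not from this thread, and the 
commands assume the Heroku CLI is installed):

```shell
# Procfile (committed to the repo) -- one line telling Heroku how to
# start the web process, e.g. for a Ruby app:
#   web: bundle exec puma -C config/puma.rb

heroku create my-app                    # create the app (name illustrative)
heroku addons:create heroku-postgresql  # provision a PostgreSQL service
git push heroku master                  # Heroku detects the stack, installs
                                        # dependencies, and deploys
```

That git push is the whole deployment story; there is no server to log into.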

There are some drawbacks:
- it might not be quite as flexible as a full VM depending on your needs
- you usually must adhere to certain app development methodologies,
i.e. 12-factor apps (http://www.12factor.net/), but this can be a benefit
too
- sometimes there is a little bit of vendor lock-in -- you often must make some 
minor changes to your application if you want to move to a different vendor

Hope this helps :)

~James


On Wed, Jun 8, 2016 at 6:43 AM, Harper, Cynthia <char...@vts.edu> wrote:
> How does it compare to Amazon Web Services?
> Cindy Harper
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Andromeda Yelton
> Sent: Tuesday, June 07, 2016 9:50 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Heroku
>
> I'm a freelance software developer not embedded in a library, but I use 
> Heroku routinely to host apps I'm developing for fun, or as a testing site, 
> and one of my clients deploys its production app on Heroku. It took me a 
> while to wrap my head around, but I love it to little tiny pieces (and once 
> you do wrap your head around it, it becomes *unbelievably* straightforward).
> Do you have any more specific questions?
>
> On Mon, Jun 6, 2016 at 3:15 PM, Louisa Choy <lc...@wheelock.edu> wrote:
>
>> Hi everyone,
>>
>> My college is using Heroku to host a web application for another 
>> department.  I'm trying to get a sense of how many institutions out 
>> there are using it, what you use it for, what the pool of expertise 
>> is like for it, and what your thoughts on it are.
>>
>> Thanks!
>> -Louisa
>>
>>
>> Louisa Choy
>> Digital Services Librarian
>> Wheelock College Library
>> 132 Riverway
>> Boston, MA   02215
>> (617) 879-2213
>> www.wheelock.edu/library
>> (she/her/hers)
>>
>
>
>
> --
> Andromeda Yelton
> Board of Directors/Vice-President Elect, Library & Information 
> Technology
> Association: http://www.lita.org
> http://andromedayelton.com
> @ThatAndromeda <http://twitter.com/ThatAndromeda>


Re: [CODE4LIB] Heroku

2016-06-08 Thread Harper, Cynthia
How does it compare to Amazon Web Services?
Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Andromeda Yelton
Sent: Tuesday, June 07, 2016 9:50 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Heroku

I'm a freelance software developer not embedded in a library, but I use Heroku 
routinely to host apps I'm developing for fun, or as a testing site, and one of 
my clients deploys its production app on Heroku. It took me a while to wrap my 
head around, but I love it to little tiny pieces (and once you do wrap your 
head around it, it becomes *unbelievably* straightforward).
Do you have any more specific questions?

On Mon, Jun 6, 2016 at 3:15 PM, Louisa Choy  wrote:

> Hi everyone,
>
> My college is using Heroku to host a web application for another 
> department.  I'm trying to get a sense of how many institutions out 
> there are using it, what you use it for, what the pool of expertise is 
> like for it, and what your thoughts on it are.
>
> Thanks!
> -Louisa
>
>
> Louisa Choy
> Digital Services Librarian
> Wheelock College Library
> 132 Riverway
> Boston, MA   02215
> (617) 879-2213
> www.wheelock.edu/library
> (she/her/hers)
>



--
Andromeda Yelton
Board of Directors/Vice-President Elect, Library & Information Technology
Association: http://www.lita.org
http://andromedayelton.com
@ThatAndromeda 


[CODE4LIB] Related question about xID services - was RE: "Form" dictionary for xID Service - xisbn getEditions

2016-05-04 Thread Harper, Cynthia
And is the reason I can't get a response to this request, while you apparently 
do, that the service is now only available to previously registered users? 
What has replaced it for those who don't subscribe to WorldCat Discovery, 
FirstSearch, or WMS?

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark 
Witteman
Sent: Wednesday, May 04, 2016 3:23 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] "Form" dictionary for xID Service - xisbn getEditions

Greetings code4lib gang,

First time poster, five-month lurker.

Try as I might, I cannot seem to find a dictionary for the values for "form" in 
the response for the OCLC xID services.

For example, in the response to this request:
http://xisbn.worldcat.org/webservices/xid/isbn/978528300?method=getEditions&format=json&fl=*

What do these two values mean (BA and BC) and what other values for "form" can 
occur?

"form":["BA"],

"form":["BC"],
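
For anyone poking at this programmatically, the JSON is at least easy to 
dissect while we wait for a dictionary of the codes. A minimal sketch (the 
response below is fabricated for illustration; only its overall shape follows 
the xID service):

```python
import json

# Illustrative getEditions-style response (data made up for the example);
# each edition record may carry a "form" list like the BA/BC values above.
sample = json.loads("""
{"stat": "ok",
 "list": [{"isbn": ["9780000000001"], "form": ["BA"]},
          {"isbn": ["9780000000002"], "form": ["BC"]}]}
""")

# Collect the distinct "form" codes across all edition records
forms = sorted({f for rec in sample["list"] for f in rec.get("form", [])})
print(forms)  # ['BA', 'BC']
```

Tallying the codes across a batch of ISBNs this way at least shows which values 
occur in practice, even without knowing what each one means.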

Sincerely,

Mark
--
Mark H. Witteman | Technical Consultant | +1 225-578-0421
LOUIS: The Louisiana Library Network


Re: [CODE4LIB] [patronprivacy] Let's Encrypt and EZProxy

2016-04-18 Thread Harper, Cynthia
So, the reason we wanted to proxy scholar.google.com is that Google provides 
link resolver links if the requesting IP is in our domain; otherwise, each 
remote user has to configure their own browser.  GoDaddy wouldn't let me add 
scholar.google.com.librarycatalog.vts.edu to my certificate. So unless there's 
a script we can use to let users click on a link and automatically register 
their Scholar link resolvers, we have to explain it to them in further detail. 
That's the only proxying of google.com that we wanted to do.

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Andrew 
Anderson
Sent: Saturday, January 16, 2016 3:25 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] [patronprivacy] Let's Encrypt and EZProxy

On Jan 15, 2016, at 13:20, Salazar, Christina  
wrote:

> Something that I also see implied here is why aren't vendors doing a better 
> job collaborating with the developers of EZProxy, instead of only putting the 
> pressure on Let's Encrypt to support wildcard certs (although I kind of think 
> that's the better way to go).


Because it's easier than actually taking the time to fully understand the 
platforms and how all the pieces fit together.  

I've lost track of how many discussions I have had with various vendors 
recently over:

* Why they need to encode URLs before trying to pass them to another service 
like EZproxy's login handler
* Why they really do need to pay attention to what RFC 2616 Section 3.2.2 and 
RFC 2396 Section 2.2 have to say regarding the use of reserved characters in 
URLs
* Why it's a bad idea to add "DJ google.com" in the EZproxy stanza
* Why it's a bad idea to add "DJ " in the EZproxy stanza
* Why it's a bad idea to add "DJ " in the EZproxy 
stanza
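
The encoding point in the first two bullets comes down to percent-encoding: a 
target URL handed to a proxy login handler must be encoded so its own reserved 
characters survive. A small sketch (the proxy and platform hostnames are made 
up):

```python
from urllib.parse import quote

# A target URL containing reserved characters (?, =, &)
target = "https://platform.example.com/article?id=123&view=full"

# If the raw URL is pasted after ?url=, the login handler can misread
# "&view=full" as a parameter of the login request itself.
# Percent-encoding the target first keeps it intact:
login = "https://ezproxy.example.edu/login?url=" + quote(target, safe="")

print(login)
```

Decoding on the far end recovers the original target exactly, which is the 
whole point of the reserved-character rules in RFC 2396.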

Instead of trying to understand how proxied access works, someone just keeps 
slapping "DJ " or "HJ " into the service stanza 
until the service starts working, and then never revisits the final product to 
see if those additions were really necessary.  Do this for a few platform 
iterations, and the resulting stanza can become insane.

The conversations typically go something like this:

Me: "Why are you trying to proxy google.com services?" 
Vendor: "Because we're loading the jQuery JavaScript library from their CDN."
Me: "And how are you handling registering all your customers' IP addresses with 
Google?" 
...  ... 
Vendor: "We don't".
Me: "Then why do you think you need that in your proxy stanza?"
...  ...
Vendor: "We . . . don't?"
Me: "Exactly. And how are you reaping the performance benefits of a CDN service 
if you're funneling all of the unauthenticated web traffic through a proxy 
server instead of allowing the CDN to do what it does best and keeping the 
proxy server out of the middle of that transaction?"
Vendor: "We . . . aren't?"
Me: "That's right, by adding 'DJ ' to your stanza, you have 
successfully negated the performance benefits of using a CDN service."

-- 
Andrew Anderson, President & CEO, Library and Information Resources Network, 
Inc.
http://www.lirn.net/ | http://www.twitter.com/LIRNnotes | 
http://www.facebook.com/LIRNnotes


Re: [CODE4LIB] "Illegal Aliens" subject heading

2016-04-18 Thread Harper, Cynthia
Actually, now that I think of it, maybe this controversy is what we need to get 
our catalogs and discovery engines to make better use of our cross-references, 
and to make them more visible and easier to use.
Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Galen 
Charlton
Sent: Monday, April 18, 2016 11:00 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] "Illegal Aliens" subject heading

Hi,

On Mon, Apr 18, 2016 at 10:28 AM, Eric Hellman  wrote:
> I also think that Code4Lib is potentially more powerful than congress 
> in this situation. LC says that "all of the revisions will appear on a 
> Tentative List and be approved no earlier than May 2016; the revision 
> of existing bibliographic records will commence shortly thereafter." 
> It seems unlikely that Congress can act before this happens. We could 
> then implement systems that effect this subject heading deprecation 
> without regard to Rep. Diane Black and Congress. We can scrub the MARC 
> records. We can alter the cataloguing interfaces. We could tweak the 
> cataloguing standard.

Or to put it another way, "we" could make a (hopefully friendly) fork of LCSH 
if it gets compromised via an act of law.

Such a fork could provide benefits going far beyond protesting Congressional 
interference in LCSH:

* If appropriate tools for collaboration are built, it could allow updates to 
be made faster than what the current SACO process permits, while still 
benefiting from the careful work of LC subject experts.
* It could provide infrastructure for easily creating additional forks of the 
vocabulary, for cases where LCSH is a decent starting point but needs 
refinement for a particular collection of things to be described.

However, I put "we" in quotes because such an undertaking could not succeed 
simply by throwing code at the problem. There are many Code4Lib folks who could 
munge authority records, build tools for collaborative thesaurus maintenance, 
stand up SPARQL endpoints and feeds of headings changes and so forth — but 
unless that fork provides infrastructure that catalogers and metadataists 
/want/ to use and has some guarantee of sticking around, the end result would 
be nothing more than fodder for a C4L Journal article or two.

> What else would we need?

Involvement of folks who might use and contribute to such a fork from the 
get-go, and early thought to how such a fork can be sustained. I think we 
already have the technology, for the most part; the question is whether we have 
the people.

Regards,

Galen
--
Galen Charlton
Infrastructure and Added Services Manager Equinox Software, Inc. / Open Your 
Library
email:  g...@esilibrary.com
direct: +1 770-709-5581
cell:   +1 404-984-4366
skype:  gmcharlt
web:http://www.esilibrary.com/
Supporting Koha and Evergreen: http://koha-community.org & 
http://evergreen-ils.org


Re: [CODE4LIB] "Illegal Aliens" subject heading

2016-04-18 Thread Harper, Cynthia
Images of a bilingual catalog: Republicanese and Democratese.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Galen 
Charlton
Sent: Monday, April 18, 2016 11:00 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] "Illegal Aliens" subject heading

Hi,

On Mon, Apr 18, 2016 at 10:28 AM, Eric Hellman  wrote:
> I also think that Code4Lib is potentially more powerful than congress 
> in this situation. LC says that "all of the revisions will appear on a 
> Tentative List and be approved no earlier than May 2016; the revision 
> of existing bibliographic records will commence shortly thereafter." 
> It seems unlikely that Congress can act before this happens. We could 
> then implement systems that effect this subject heading deprecation 
> without regard to Rep. Diane Black and Congress. We can scrub the MARC 
> records. We can alter the cataloguing interfaces. We could tweak the 
> cataloguing standard.

Or to put it another way, "we" could make a (hopefully friendly) fork of LCSH 
if it gets compromised via an act of law.

Such a fork could provide benefits going far beyond protesting Congressional 
interference in LCSH:

* If appropriate tools for collaboration are built, it could allow updates to 
be made faster than what the current SACO process permits, while still 
benefiting from the careful work of LC subject experts.
* It could provide infrastructure for easily creating additional forks of the 
vocabulary, for cases where LCSH is a decent starting point but needs 
refinement for a particular collection of things to be described.

However, I put "we" in quotes because such an undertaking could not succeed 
simply by throwing code at the problem. There are many Code4Lib folks who could 
munge authority records, build tools for collaborative thesaurus maintenance, 
stand up SPARQL endpoints and feeds of headings changes and so forth — but 
unless that fork provides infrastructure that catalogers and metadataists 
/want/ to use and has some guarantee of sticking around, the end result would 
be nothing more than fodder for a C4L Journal article or two.

> What else would we need?

Involvement of folks who might use and contribute to such a fork from the 
get-go, and early thought to how such a fork can be sustained. I think we 
already have the technology, for the most part; the question is whether we have 
the people.

Regards,

Galen
--
Galen Charlton
Infrastructure and Added Services Manager Equinox Software, Inc. / Open Your 
Library
email:  g...@esilibrary.com
direct: +1 770-709-5581
cell:   +1 404-984-4366
skype:  gmcharlt
web:http://www.esilibrary.com/
Supporting Koha and Evergreen: http://koha-community.org & 
http://evergreen-ils.org


Re: [CODE4LIB] LCSH, Bisac, facets, hierarchy?

2016-04-13 Thread Harper, Cynthia

From a librarian's perspective, we know searching is messy: a researcher can't 
hope to find one perfect subject heading that will reveal all their related 
content in a single term.  Searching is exploring through overlapping terms, 
and compiling a bibliography from the pearls found in the process. This 
interface makes clearer what the related terms may be, given a broad term like 
"practical theology".  And it's so nice that it combines the classification 
structure with the subject headings.

Cindy Harper
@vts.edu

-Original Message-
From: Code for Libraries 
[mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Kent Fitch
Sent: Wednesday, April 13, 2016 8:17 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] LCSH, Bisac, facets, hierarchy?
About ten years ago, I was wondering how to make the structure in LCSH (or at 
least how it was encoded in MARC subject tags) more useful. So, when 
implementing a prototype for a new library catalogue at the National Library of 
Australia, I tried using the subject tag contents to represent a hierarchy, 
counted the number of hits against parts of that hierarchy for a given search, 
and then presented the subject tags in a hierarchy with hit counts.  One of the 
motivations was to help expose to the searcher how works relevant to their 
search may have been LCSH-subject-catalogued.
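
A toy reconstruction of that hit-counting idea (not the actual NLA code, and 
the headings below are illustrative) might look like:

```python
from collections import Counter

def tally(headings):
    """Count search hits at every level of the hierarchy implied by
    LCSH-style subdivided headings ("Topic -- Subdivision -- ...")."""
    counts = Counter()
    for heading in headings:
        parts = [p.strip() for p in heading.split("--")]
        for i in range(1, len(parts) + 1):
            counts[" -- ".join(parts[:i])] += 1
    return counts

# Subject headings from the records matching some search (illustrative data):
counts = tally([
    "Egypt -- History -- To 332 B.C.",
    "Egypt -- History -- To 332 B.C.",
    "Egypt -- Antiquities",
])
print(counts["Egypt"])             # 3 hits anywhere under Egypt
print(counts["Egypt -- History"])  # 2 hits under Egypt -- History
```

Displaying those per-node counts next to the tree is what lets the searcher see 
how the matching works were subject-catalogued.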

I'm a programmer, not a UI person, so the formatting of the results was fairly 
primitive, but that prototype from ten years ago ("Library Labs") is still 
running.

For example, search results for /ancient egypt/

http://ll01.nla.gov.au/search.jsp?searchTerm=ancient+egypt=0.5=0.05=12.0=9.0=9.0=9.0=4.0=3.0=3.0=3.0=18.0=15.0

/computer art/

http://ll01.nla.gov.au/search.jsp?searchTerm=computer+art=0.5=0.05=12.0=9.0=9.0=9.0=4.0=3.0=3.0=3.0=18.0=15.0

/history of utah/

http://ll01.nla.gov.au/search.jsp?searchTerm=history+of+utah=0.5=0.05=12.0=9.0=9.0=9.0=4.0=3.0=3.0=3.0=18.0=15.0

This prototype also explored a subject hierarchy which had been of interest to 
the NLA's Assistant Director-General, Dr Warwick Cathro, over many years, the 
RLG "Conspectus" hierarchy, which I guess was not unlike BISAC in its aims.  It 
is shown further down the right-hand column.

Both the subject hierarchy and Conspectus were interesting, but neither made it 
into the eventual production search system, Trove, implemented at the NLA, in 
which subject faceting or hierarchy is absent from results
display:

http://trove.nla.gov.au/book/result?q=ancient+egypt
http://trove.nla.gov.au/book/result?q=computer+art
http://trove.nla.gov.au/book/result?q=history+of+utah

The "Library Labs" prototype is running on a small VM, so searching may be 
slow, and it hasn't been updated with any content since 2006.  But maybe the 
way it attempted to provide subject grouping and encourage narrowing of search 
by LCSH, or exploring using LCSH rather than the provided search terms, may 
trigger some more experiments.

Kent Fitch

On Thu, Apr 14, 2016 at 3:11 AM, Mark Watkins 
>
wrote:

>  :)
>
> sounds like there is a lot of useful metadata but somewhat scattered
> amongst various fields, depending on when the item was cataloged or tagged.
> Which seems to correspond to anecdotal surfing of the Harvard data.
>
> I guess my new task is to build something that aggregates and
> reconciles portions of LCSH, LCFGT, and GSAFD :).
>
> Thanks for the additional perspective!
>



Re: [CODE4LIB] LCSH, Bisac, facets, hierarchy?

2016-04-13 Thread Harper, Cynthia
Clicking on the link "409" from "Philosophy & Religion > Religion" I get:
Object not found!

The requested URL was not found on this server. If you entered the URL manually 
please check your spelling and try again.

If you think this is a server error, please contact the webmaster.
Error 404
wwwapp.cc.columbia.edu
Wed Apr 13 13:54:34 2016
Apache

I'll contact the webmaster.

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of William 
Denton
Sent: Wednesday, April 13, 2016 10:39 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] LCSH, Bisac, facets, hierarchy?

On 13 April 2016, Mark Watkins wrote:

> I'm a library sciences newbie, but it seems like LCSH doesn't really 
> provide a formal hierarchy of genre/topic, just a giant controlled 
> vocabulary. Bisac seems to provide the "expected" hierarchy.
>
> Is anyone aware of any approaches (or better yet code!) that 
> translates lcsh to something like BISAC categories (either BISAC 
> specifically or some other hierarchy/ontology)? General web searching didn't 
> find anything obvious.

There's HILCC, the Hierarchical Interface of LC Classification:

https://www1.columbia.edu/sec/cu/libraries/bts/hilcc/subject_map.html

Bill
--
William Denton ↔  Toronto, Canada ↔  https://www.miskatonic.org/


Re: [CODE4LIB] Google can give you answers, but librarians give you the right answers

2016-04-06 Thread Harper, Cynthia
Amen to the need to help people narrow down and focus their searches; amen to 
BT/NT in LCSH.  I'm working in a smaller subject domain now than I used to: 
theology and religion. It makes the idea of projects like mining seminary 
reserve lists for recommended works [I really wish ATLA would let us mine book 
reviews], or most-cited-author lists, or other selection tools aimed at users, 
seem possible.  And how to combine browsing the classification with what 
LCSH terms are linked there...
Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Greg 
Lindahl
Sent: Wednesday, April 06, 2016 11:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Google can give you answers, but librarians give you 
the right answers

On Wed, Apr 06, 2016 at 07:42:11AM -0700, Karen Coyle wrote:

> Also, without the links that fuel pagerank, the ranking is very 
> unsatisfactory - cf. Google Book searches, which are often very 
> unsatisfying -- and face it, if Google can't make it work, what are 
> the odds that we can?

Karen,

I wouldn't generalize so far for either web search or book search.
Pagerank is close to useless on the modern web thanks to webspam.
When Google first launched, its focus on anchortext was just as important as 
pagerank. On the books side, properties like publisher authority, book usage, 
and used book sales+prices make nice ranking signals. Book content also 
contains a lot of citations, which can be used to compute impact factors. 
Google Books has only scratched the surface of what's possible for book search 
and discovery.
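
As a toy illustration of the citation idea (the data is invented for the 
example), even raw inbound-citation counts give a crude ranking signal:

```python
from collections import Counter

# Tiny made-up citation graph: work -> works it cites
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "D": ["C", "B"],
}

# Count inbound citations as a simple impact signal
inbound = Counter(cited for refs in citations.values() for cited in refs)
ranked = sorted(inbound, key=inbound.get, reverse=True)
print(ranked)  # ['C', 'B']: C is cited three times, B twice
```

Real impact measures weight each citation by the citing work's own importance, 
but even these raw counts combine usefully with the other signals above.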

-- greg

http://blog.archive.org/2016/02/09/how-will-we-explore-books-in-the-21st-century/


Re: [CODE4LIB] Internet of Things

2016-03-31 Thread Harper, Cynthia
+1 Angela.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Angela 
Galvan
Sent: Thursday, March 31, 2016 12:15 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Internet of Things

Randall speaks to the issue better than I can at the moment.

https://xkcd.com/1053/

-Angela
On Mar 30, 2016 11:11 PM, "Cornel Darden Jr." 
wrote:

> Hello,
>
> Yes, indeed. IoT is not limited to a specific area of librarianship.
> Medical librarians and law librarians should be just as concerned and 
> knowledgeable about such things. Are we suggesting that there is an 
> area of librarianship that does not benefit from the IoT?
>
> As far as IoT being tech-ish, I would argue that librarianship is too.
> Hence, a huge divide in the craft.
>
> What area of librarianship is IoT of specific interest? Please edify.
>
> I apologize if I offended anyone, but as a librarian I am offended by 
> our lack of consistency in a field that is obviously losing tremendous ground.
> I believe this very conversation is one of the reasons why that is such.
>
> We have to grow a backbone eventually and address it.
>
> Every industry and discipline is a part of the information and 
> technology revolution. Why would librarianship, of all fields, believe 
> that we are somehow not only leaders of the revolution but have the 
> "option" to be on the back burner.
>
> Yet, I'm a private librarian, and the private sector doesn't afford 
> such complacency in practicing one's craft.
>
> Again, sorry for the rant.
>
> Thanks,
>
> Cornel Darden Jr.
> Chief Information Officer
> Casanova Information Services, LLC
> Office Phone: (779) 205-3105
> Mobile Phone: (708) 705-2945
>
> Sent from my iPhone
>
> > On Mar 30, 2016, at 9:16 PM, Lesli M  wrote:
> >
> > I feel compelled to pipe up about the comment "Very sad that a 
> > librarian
> didn't know what it was."
> >
> > Librarians come in all flavors and varieties. Until I worked in a
> medical library, I had no idea what a systematic review was. I had no 
> idea there was a variety of librarian called "clinical librarian."
> >
> > Do you know the hot new interest for law libraries? Medical libraries?
> Science libraries?
> >
> > The IoT is a specific area of interest. Just like every other 
> > special
> interest out there.
> >
> > Is it really justified to expect all librarians of all flavors and
> varieties to know this very tech-ish thing called IoT?
> >
> > Lesli
>


Re: [CODE4LIB] Deduping linked data in search - was RE: [CODE4LIB] Structured Data Markup on library web sites

2016-03-29 Thread Harper, Cynthia
Hopefully Google will have a means to let libraries/patrons select/deselect 
areas where they will advertise their resources. We're a private institution in 
Alexandria VA. Our resources are pertinent to other people on our single IP 
domain, but less so to others in Alexandria VA.  Maybe they'd use the same 
libraries you choose for Google Scholar link resolvers.

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kevin 
Ford
Sent: Tuesday, March 29, 2016 10:45 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Deduping linked data in search - was RE: [CODE4LIB] 
Structured Data Markup on library web sites

It's probably not safe to say that "all search is local" but there is most 
certainly a strong local component considered for every search. 
For me, every hit on the first page of Google's results for a search for "ice 
cream parlor" is related to Chicago, which is where I executed the search.  A 
search for a book (I chose a current bestseller as a test), however, does not 
return a local hit in the first two pages.  That's not to say it can't happen.  
It might simply (hah! 'simple') be that Google does not know enough about local 
inventory (books available from a local library or in stock at a local 
bookstore) to offer that type of assistance/precision.  While this may seem 
like a theory only, Zepheira's libhub initiative has been trying to make this a 
reality by publishing individual libraries' structured data so that Google can 
make sense of it.  And, at this point, if anyone from Libhub is on this list, 
I'll let you take it from here...

Yours,
Kevin


On 03/29/2016 08:52 AM, Ruth Tillman wrote:
> An off-the-cuff response: I've heard it suggested in talks about 
> Bibframe that just as Google tailors your results based on location 
> (i.e. if I put in "pizza," I'll get pizza places in South Bend, as 
> well as pizza recipes and whatnot), they'd tailor your library results 
> based on location. So if I were in downtown DC, and Googled a book, I 
> would see the DCPL holdings but not Indiana, and vice-versa.
>
> There are maybe 5 or 10 assumptions happening there that other people 
> can spell out better, but it would be a reasonable solution for 
> deduping assuming the metadata pretty much matches.
>
> On Tue, Mar 29, 2016 at 9:40 AM, Harper, Cynthia <char...@vts.edu> wrote:
>
>> Forgive me if I'm confusing schema.org and Bibframe, but I wonder how 
>> Google is going to dedupe all the sources of a given 
>> document/material when many libraries have their holdings in 
>> Bibframe?  These sample searches made me wonder about that again.  Has this 
>> been discussed?
>>
>> Cindy Harper
>> char...@vts.edu
>> 
>> From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of 
>> Karen Coyle [li...@kcoyle.net]
>> Sent: Thursday, March 24, 2016 10:28 PM
>> To: CODE4LIB@LISTSERV.ND.EDU
>> Subject: Re: [CODE4LIB] Structured Data Markup on library web sites
>>
>> I worked on the addition of schema.org data to the Bryn Mawr 
>> Classical Reviews. Although I advised doing a "before and after" test 
>> to see how it affected retrieval, I lost touch with the folks before 
>> that could happen. However, their reviews do show up fairly high in 
>> Google, around the 3-5th place on page one. Try these searches:
>>
>> how to read a latin poem
>> /From Listeners to Viewers:/
>> /Butrint 4: The Archaeology and Histories of an Ionian Town/
>>
>> kc
>>
>> On 3/22/16 5:44 PM, Jennifer DeJonghe wrote:
>>> Hello,
>>>
>>> I'm looking for examples of library web sites or university web 
>>> sites
>> that are using Structured Data / schema.org to mark up books, 
>> locations, events, etc, on their public web sites or blogs. I'm NOT 
>> really looking for huge linked data projects where large record sets 
>> are marked up, but more simple SEO practices for displaying rich snippets in 
>> search engine results.
>>>
>>> If you have examples of library or university websites doing this,
>> please send me a link!
>>>
>>> Thank you,
>>> Jennifer
>>>
>>> Jennifer DeJonghe
>>> Librarian and Professor
>>> Library and Information Services
>>> Metropolitan State University
>>> St. Paul, MN
>>
>> --
>> Karen Coyle
>> kco...@kcoyle.net http://kcoyle.net
>> m: +1-510-435-8234
>> skype: kcoylenet/+1-510-984-3600
>>
>
>
>


[CODE4LIB] Deduping linked data in search - was RE: [CODE4LIB] Structured Data Markup on library web sites

2016-03-29 Thread Harper, Cynthia
Forgive me if I'm confusing schema.org and Bibframe, but I wonder how Google is 
going to dedupe all the sources of a given document/material when many 
libraries have their holdings in Bibframe?  These sample searches made me 
wonder about that again.  Has this been discussed?

Cindy Harper
char...@vts.edu

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Karen Coyle 
[li...@kcoyle.net]
Sent: Thursday, March 24, 2016 10:28 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Structured Data Markup on library web sites

I worked on the addition of schema.org data to the Bryn Mawr Classical
Reviews. Although I advised doing a "before and after" test to see how
it affected retrieval, I lost touch with the folks before that could
happen. However, their reviews do show up fairly high in Google, around
the 3-5th place on page one. Try these searches:

how to read a latin poem
/From Listeners to Viewers:/
/Butrint 4: The Archaeology and Histories of an Ionian Town/

kc

On 3/22/16 5:44 PM, Jennifer DeJonghe wrote:
> Hello,
>
> I'm looking for examples of library web sites or university web sites that 
> are using Structured Data / schema.org to mark up books, locations, events, 
> etc, on their public web sites or blogs. I'm NOT really looking for huge 
> linked data projects where large record sets are marked up, but more simple 
> SEO practices for displaying rich snippets in search engine results.
>
> If you have examples of library or university websites doing this, please 
> send me a link!
>
> Thank you,
> Jennifer
>
> Jennifer DeJonghe
> Librarian and Professor
> Library and Information Services
> Metropolitan State University
> St. Paul, MN

--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
m: +1-510-435-8234
skype: kcoylenet/+1-510-984-3600
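
The sort of markup Jennifer describes can be as small as a JSON-LD block in the 
page. A minimal schema.org Book sketch (the title is one of the reviewed books 
above; the other values are illustrative placeholders):

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Book",
  "name": "Butrint 4: The Archaeology and Histories of an Ionian Town",
  "author": {"@type": "Person", "name": "Placeholder Author"},
  "publisher": {"@type": "Organization", "name": "Placeholder Press"},
  "isbn": "9780000000000"
}
</script>
```

Search engines that understand schema.org can read this alongside the visible 
page and use it for rich snippets in results.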


Re: [CODE4LIB] code4lib mailing list

2016-03-24 Thread Harper, Cynthia
I didn't actually research the tool (EasyDiscuss). We were assured by the 
person making the announcement that all conversation could be automatically 
forwarded to email, and that input could be by email as well.  But I don't know.
Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Paul 
Hoffman
Sent: Thursday, March 24, 2016 9:37 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] code4lib mailing list

On Thu, Mar 24, 2016 at 01:17:52PM +, Harper, Cynthia wrote:
> It was just announced at the Innovative Users group meeting that that 
> listserv is moving to EasyDiscuss 
> http://extensions.joomla.org/extension/easydiscuss.

Here are the so-called benefits of EasyDiscuss:

| Use EasyDiscuss as a forum to manage your customers inquiries, build 
| closer customer loyalty, build credibility, engage in multiple 
| conversations, gather valuable experience from users, or simply build 
| a repository of information that grows over time. EasyDiscuss is as 
| good as Yahoo! Answers!

<URL:http://extensions.joomla.org/extension/easydiscuss>

No mention of e-mail at all.  If I were an Innovative user I would say "No 
thanks!".

Paul.

> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Jason Bengtson
> Sent: Thursday, March 24, 2016 9:06 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] code4lib mailing list
> 
> Yeah, I like option two as well. I could live with option one if need be, but 
> like Matt and Eric I'm not that keen on Google data mining the list.
> 
> Best regards,
> 
> *Jason Bengtson, MLIS, MA*
> Assistant Director, IT Services
> K-State Libraries
> 414 Hale Library
> Manhattan, KS 66506
> 785-532-7450
> jbengt...@ksu.edu
> www.jasonbengtson.com
> 
> On Thu, Mar 24, 2016 at 7:38 AM, Matt Sherman 
> <matt.r.sher...@gmail.com>
> wrote:
> 
> > I have no technical answers to the questions you pose, but I second 
> > Option #2.
> >
> > On Thu, Mar 24, 2016 at 5:29 AM, Eric Lease Morgan <emor...@nd.edu> wrote:
> >
> > > Alas, the Code4Lib mailing list software will most likely need to 
> > > be migrated before the end of summer, and I’m proposing a number of 
> > > possible options for the list’s continued existence.
> > >
> > > I have been managing the Code4Lib mailing list since its inception 
> > > about twelve years ago. This work has been both a privilege and an 
> > > honor. The list itself runs on top of the venerable LISTSERV 
> > > application and is
> > hosted
> > > by the University of Notre Dame. The list includes about 3,500
> > subscribers,
> > > and traffic very very rarely gets over fifty messages a day. But 
> > > alas, University support for LISTSERV is going away, and I believe 
> > > the
> > University
> > > wants to migrate the whole kit and caboodle to Google Groups.
> > >
> > > Personally, I don’t like the idea of Code4Lib moving to Google Groups.
> > > Google knows enough about me (us), and I don’t feel the need for 
> > > them to know more. Sure, moving to Google Groups includes a large 
> > > convenience factor, but it also means we have less control over 
> > > our own computing environment, let alone our data.
> > >
> > > So, what do we (I) do? I see three options:
> > >
> > >   0. Let the mailing list die — Not really an option, in my opinion
> > >   1. Use Google Groups - Feasible, (probably) reliable, but with 
> > > less control
> > >   2. Host it ourselves - More difficult, more responsibility, all 
> > > but absolute control
> > >
> > > Again, personally, I like Option #2, and I would probably be 
> > > willing to host the list on one of my computers, (and after a 
> > > bit of DNS
> > trickery)
> > > complete with a code4lib.org domain.
> > >
> > > What do y’all think? If we go with Option #2, then where might we 
> > > host
> > the
> > > list, who might do the work, and what software might we use?
> > >
> > > —
> > > Eric Lease Morgan
> > > Artist- And Librarian-At-Large
> > >
> >

--
Paul Hoffman <p...@flo.org>
Systems Librarian
Fenway Libraries Online
c/o Wentworth Institute of Technology
550 Huntington Ave.
Boston, MA 02115
(617) 442-2384 (FLO main number)


Re: [CODE4LIB] code4lib mailing list

2016-03-24 Thread Harper, Cynthia
It was just announced at the Innovative Users group meeting that that listserv 
is moving to EasyDiscuss http://extensions.joomla.org/extension/easydiscuss.

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jason 
Bengtson
Sent: Thursday, March 24, 2016 9:06 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] code4lib mailing list

Yeah, I like option two as well. I could live with option one if need be, but 
like Matt and Eric I'm not that keen on Google data mining the list.

Best regards,

*Jason Bengtson, MLIS, MA*
Assistant Director, IT Services
K-State Libraries
414 Hale Library
Manhattan, KS 66506
785-532-7450
jbengt...@ksu.edu
www.jasonbengtson.com

On Thu, Mar 24, 2016 at 7:38 AM, Matt Sherman 
wrote:

> I have no technical answers to the questions you pose, but I second 
> Option #2.
>
> On Thu, Mar 24, 2016 at 5:29 AM, Eric Lease Morgan  wrote:
>
> > Alas, the Code4Lib mailing list software will most likely need to be 
> > migrated before the end of summer, and I’m proposing a number of 
> > possible options for the list’s continued existence.
> >
> > I have been managing the Code4Lib mailing list since its inception 
> > about twelve years ago. This work has been both a privilege and an 
> > honor. The list itself runs on top of the venerable LISTSERV 
> > application and is
> hosted
> > by the University of Notre Dame. The list includes about 3,500
> subscribers,
> > and traffic very very rarely gets over fifty messages a day. But 
> > alas, University support for LISTSERV is going away, and I believe 
> > the
> University
> > wants to migrate the whole kit and caboodle to Google Groups.
> >
> > Personally, I don’t like the idea of Code4Lib moving to Google Groups.
> > Google knows enough about me (us), and I don’t feel the need for 
> > them to know more. Sure, moving to Google Groups includes a large 
> > convenience factor, but it also means we have less control over our 
> > own computing environment, let alone our data.
> >
> > So, what do we (I) do? I see three options:
> >
> >   0. Let the mailing list die — Not really an option, in my opinion
> >   1. Use Google Groups - Feasible, (probably) reliable, but with 
> > less control
> >   2. Host it ourselves - More difficult, more responsibility, all 
> > but absolute control
> >
> > Again, personally, I like Option #2, and I would probably be willing 
> > to host the list on one of my computers, (and after a bit of DNS
> trickery)
> > complete with a code4lib.org domain.
> >
> > What do y’all think? If we go with Option #2, then where might we 
> > host
> the
> > list, who might do the work, and what software might we use?
> >
> > —
> > Eric Lease Morgan
> > Artist- And Librarian-At-Large
> >
>


Re: [CODE4LIB] research project about feeling stupid in professional communication

2016-03-22 Thread Harper, Cynthia
+1!

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Tuesday, March 22, 2016 6:55 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] research project about feeling stupid in professional 
communication

In my humble opinion, what we have here is a failure to communicate. [1]

Libraries, especially larger libraries, are increasingly made up of many 
different departments, including but not limited to departments such as: 
cataloging, public services, collections, preservation, archives, and 
now-a-days departments of computer staff. From my point of view, these various 
departments fail to see the similarities between themselves, and instead focus 
on their differences. This focus on the differences is amplified by the use of 
dissimilar vocabularies and subdiscipline-specific jargon. This use of 
dissimilar vocabularies creates a communications gap that, left unresolved, 
ultimately creates animosity between groups. I believe this is especially true 
between the more traditional library departments and the computer staff. This 
communications gap is an impediment to achieving the goals of librarianship, 
and any library — whether it be big or small — needs to address these issues 
lest it waste both its time and money.

For example, the definitions of things like MARC, databases & indexes, 
collections, and services are not shared across (especially larger) library 
departments.

What is the solution to these problems? In my opinion, there are many 
possibilities, but the solution ultimately rests with individuals willing to 
take the time to learn from their co-workers. It rests in the ability to 
respect — not merely tolerate — another point of view. It requires time, 
listening, discussion, reflection, and repetition. It requires getting to know 
other people on a personal level. It requires learning what others like and 
dislike. It requires comparing & contrasting points of view. It demands 
“walking a mile in the other person’s shoes”, and can be accomplished by things 
such as the physical intermingling of departments, cross-training, and simply 
by going to coffee on a regular basis.

Again, all of us working in libraries have more similarities than differences. 
Learn to appreciate the similarities, and the differences will become 
insignificant. The consequence will be a more holistic set of library 
collections and services.

[1] I have elaborated on these ideas in a blog posting - http://bit.ly/1LDpXkc

—
Eric Lease Morgan


[CODE4LIB] LibGuides UX recommendations

2016-02-23 Thread Harper, Cynthia
I noted that last year there was a move to collaborate on recommendations for 
LibGuides v2. Did that discussion move off-list? Where does it stand now?

Thanks.

Cindy Harper
E-services and periodicals librarian
Virginia Theological Seminary
Bishop Payne Library
3737 Seminary Road
Alexandria VA 22304
char...@vts.edu
703-461-1794


Re: [CODE4LIB] article discovery platforms -- post-implementation assessment?

2016-02-16 Thread Harper, Cynthia
Thanks for the very complete reply! I'll be at IUG and look for your 
presentation there!

Cindy Harper
VTS

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Thomale, Jason
Sent: Tuesday, February 16, 2016 2:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] article discovery platforms -- post-implementation 
assessment?

Hi Cindy,

Sure, and thank you for the compliment! [And, thanks Terry for the pointer to 
our report the other day, as well.]

It is homegrown. Regarding sharing, I'm currently in the process of switching 
that app (and several related projects) from local subversion to the UNT 
Libraries' GitHub space, but they're not there yet. Personally, I'm also in the 
process of making the (long overdue) switch from svn to git, so there's a 
little bit of a mental shift there on my part. My goal is to have everything 
moved over by mid-March, in time for the Innovative User Group conference--I'll 
be presenting about our local Catalog API project, which the bento box uses for 
some of its results, and I'd like to have that whole set of apps available for 
folks to look at, if possible. Though, it is only a few weeks away, so I may 
only manage to get the Catalog API up by then--we'll see.

The bento box consists of two components, a backend API and front-end app.

1. The backend API is implemented in Python Django, using Django REST 
Framework. It provides a simple interface for the front-end app to query and 
does the job of communicating with bento box search targets and returning the 
data needed for display as JSON. New search targets can be added pretty easily 
by extending a base class and overriding methods that define how to query the 
target and how to translate results into the output format. Different targets 
can return different fields, and you can use whatever fields are available in 
views and templates in the front-end app.

2. The front-end is a JS app that uses Backbone.js, RequireJS, and Bootstrap, 
skinned with our website template. It also ties into Google Analytics, with 
lots of custom events to record exactly what results people click on; how often 
"best bets" (from the Summon API) show up, for what queries, and how often 
they're clicked on; how often each target returns no results and for what 
queries, and fun things like that.
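The "extend a base class and override methods" pattern described for the backend API can be sketched roughly as below. This is only an illustration, not UNT's actual code: the class names, Solr URL, and result fields are all invented.

```python
# Hypothetical sketch of a bento-box search target; names and the Solr
# URL are invented for illustration, not taken from UNT's codebase.
import json
from urllib.parse import urlencode
from urllib.request import urlopen


class SearchTarget:
    """Base class: subclasses define how to query a target and map results."""

    def build_url(self, query):
        raise NotImplementedError

    def parse_results(self, raw):
        raise NotImplementedError

    def search(self, query):
        # Fetch the target and translate its response into display fields.
        with urlopen(self.build_url(query)) as resp:
            raw = json.load(resp)
        return self.parse_results(raw)


class DigitalCollectionsTarget(SearchTarget):
    """Example target backed by a Solr index (URL is a placeholder)."""

    solr_url = "http://solr.example.edu/digital/select"

    def build_url(self, query):
        return self.solr_url + "?" + urlencode(
            {"q": query, "wt": "json", "rows": 10}
        )

    def parse_results(self, raw):
        # Different targets may return different fields; the front end
        # simply renders whatever fields are present in each result.
        return [
            {"title": doc.get("title"), "url": doc.get("url")}
            for doc in raw["response"]["docs"]
        ]
```

With this shape, adding a new target only requires saying how to build its query URL and how to map raw results to output fields, as Jason describes.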

Search targets include:

* "Articles" retrieves results from Summon via their API.
* "Books and More" scrapes our III Web catalog (ouch). That's why that search 
tends to perform a little slowly compared to the others.
* "Librarians" hits a Solr instance where we've indexed our LibGuides and staff 
directory data, in an attempt to serve up a relevant librarian for a given 
query.
* "Journals" and "Databases" both hit our homegrown Catalog API.
* "Website" hits our Google Custom Search that services the Library website 
search.
* "Guides" hits our local Solr index of LibGuides.
* "Digital Collections" hits the Solr index for our digital library.
* "Background Materials" is another Summon API search, limited to reference 
materials.

The reason we're scraping our catalog for Books and More instead of pulling 
results from our catalog API is that the results the bento box displays 
need to mirror what the catalog displays, and attempting to replicate III's 
relevance ranking ourselves wasn't something we wanted to do. Soon we'll be 
looking at possibly implementing a Blacklight layer on top of the same Solr 
index our catalog API uses, at which point we'd switch Books and More so it 
pulls results from the API instead of scraping the III catalog.

I hope that gives you a good idea, and I'm happy to answer any additional 
questions on or off list! Thanks for asking.

Jason Thomale
Resource Discovery Systems Librarian
User Interfaces Unit, UNT Libraries



> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Harper, Cynthia
> Sent: Tuesday, February 16, 2016 11:01 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] article discovery platforms -- post- 
> implementation assessment?
> 
> Jason Thomale - can you tell us about your bento-box application? Is 
> it homegrown?  Is it shareable?  I like it a lot.
> 
> Cindy Harper
> Virginia Theological Seminary
> 
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Terry Reese
> Sent: Thursday, February 11, 2016 1:10 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] article discovery platforms -- post- 
> implementation assessment?
> 
> I'm not sure if this was exactly what you are looking for -- but a 
> talk derived from this report was given at C4L last year.
> http://digital.library.unt.edu/ark:/67531/metadc499075/
> 
> --tr
> 
> ---

Re: [CODE4LIB] article discovery platforms -- post-implementation assessment?

2016-02-16 Thread Harper, Cynthia
Jason Thomale - can you tell us about your bento-box application? Is it 
homegrown?  Is it shareable?  I like it a lot.

Cindy Harper
Virginia Theological Seminary 

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Terry 
Reese
Sent: Thursday, February 11, 2016 1:10 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] article discovery platforms -- post-implementation 
assessment?

I'm not sure if this was exactly what you are looking for -- but a talk derived 
from this report was given at C4L last year.  
http://digital.library.unt.edu/ark:/67531/metadc499075/

--tr

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Tom 
Cramer
Sent: Thursday, February 11, 2016 12:55 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] article discovery platforms -- post-implementation 
assessment?

I’ve seen many reviews of article discovery platforms (Ebsco Discovery Service, 
Ex Libris Primo Central, Serials Solutions Summon) before an implementation as 
part of a selection process—typically covering things like content coverage, 
API features, integrability with other content / sites. I have not seen any 
assessments done after an implementation.

- what has usage of the article search been like?
- what is the patron satisfaction with the service?
- has anyone gone from blended results to bento box, or bento box to blended, 
based on feedback?
- has anyone switched from one platform to another?
- knowing what you know now, would you do anything different?

I’m particularly interested in the experiences of libraries who use their own 
front ends (like Blacklight or VUFind), and hit the discovery platform via an 
API.

Does anyone have a report or local experience they can share? On list or 
directly?

It would be great to find some shoulders to stand on here. Thanks!

- Tom


[CODE4LIB] Speaking of LibX - slowness?

2015-12-16 Thread Harper, Cynthia
Speaking of LibX. When I had our IT folks test the LibX version on Firefox that 
I had customized about a year ago, they said it slowed their browsers down 
unacceptably.  Has anyone else seen the same behavior?

Cindy Harper


Re: [CODE4LIB] Website KPIs

2015-09-17 Thread Harper, Cynthia
Interesting. I wonder if they have any preliminary data showing that the 
library figures in successful students' lives - and, even more interesting, 
whether within an at-risk group (one not expected to make use of the library? 
- I don't know how you'd measure that) higher library use also corresponds to 
higher success rates. In other words, did we intervene, or are we just 
measuring correlations?

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Joshua 
Welker
Sent: Thursday, September 17, 2015 11:52 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Website KPIs

Thanks. That is a helpful start. So in that case the KPI is the number of 
interactions with the library per student?

Josh Welker
Information Technology Librarian
James C. Kirkpatrick Library
University of Central Missouri
Warrensburg, MO 64093
JCKL 2260
660.543.8022


On Wed, Sep 16, 2015 at 5:44 PM, Will Martin  wrote:

> The University of Minnesota has a fairly intricate process for 
> recording patron interactions with their library that yields very 
> detailed information of the sort you're looking for.  For example, 
> they can tell you
> -- based on statistically significant data -- the exact amount by 
> which a student's GPA rises on average for each point of contact with the 
> library.
> I've been working (slowly) towards doing the same kind of thing at my 
> institution.
>
> In brief, they log personally identifiable information about patron 
> interactions.  Say Sally Student checks out Moby Dick.  They would log 
> her name, student number, and the type of activity -- "checked out a 
> book", or "accessed a database" or "logged into a lab computer" and so 
> on.  Then, each year, they package up that data and send it to the Office of
> Institutional Research.   The OIR connects all of the student library data
> with their student records, and conducts statistical analysis on it, 
> focusing on measures of student success.
>
> They've published some aggregate results.  The person to talk to at 
> UMN about this is Shane Nackerud.
>
> This may be larger than you're looking for, because it touches on 
> overall library performance rather than just the website.  But you did 
> ask for big picture stuff.
>
> Hope this helps.
>
> Will Martin
>
> Chester Fritz Library
> University of North Dakota
>
>
> On 2015-09-16 10:50, Joshua Welker wrote:
>
>> We are in the middle of a large strategic alignment effort at our 
>> university. A big part of that is developing KPIs (key performance
>> indicators) to use as a benchmark for self-assessment and budget 
>> allocation. The goal is to develop "scorecards" of sorts to help us 
>> track our success.
>>
>> Our website and other web platforms are of vital importance to us, 
>> but I really don't know what would make good KPIs to help us evaluate 
>> them. We collect loads of website usage data, but I don't know what 
>> kind of metrics could serve as a scorecard. Looking at raw sessions 
>> and pageviews is simple but not particularly meaningful.
>>
>> There are two ways to approach KPIs. There is a data-based approach 
>> that correlates performance with data and then just tracks the data, 
>> like pageviews. Then there is an outcomes-based approach that is more 
>> qualitative in nature and simply states the outcome we want to 
>> achieve, and then a variety of types of data are examined to 
>> determine whether we are achieving the outcome.
>>
>> Long story short, I am curious about how other libraries assess the 
>> success or failure of their websites. I am not looking for usability 
>> testing strategies. I am thinking more big picture. Any help is 
>> appreciated.
>>
>> Josh Welker
>> Information Technology Librarian
>> James C. Kirkpatrick Library
>> University of Central Missouri
>> Warrensburg, MO 64093
>> JCKL 2260
>> 660.543.8022
>>
>


[CODE4LIB] Worldshare WSDL/APIs

2015-09-15 Thread Harper, Cynthia
Date: Tue, 15 Sep 2015 19:37:56 -0400
Subject: Worldshare ILL APIs
Hi - I'm at an institution that subscribes only to OCLC WMS ILL and
Cataloging, no Discovery or ILLIAD, and I'm trying to work out the cheapest
and easiest way to get openurl bib data into our ILL review file.

I came upon this article [1] that describes using WSDL to create requests in
the old Worldcat Resource Sharing.
I can't find any APIs listed on the Worldcat Developers Network that
describe adding an ILL request.  Is the WSDL method still available?  Any
suggestions (besides subscribing to WMS Discovery)?

Cindy Harper




[1] Josep-Manuel Rodríguez-Gairín, Marta Somoza-Fernández (2014), "Web
services to link interlibrary software with OCLC WorldShare", Library Hi Tech,
Vol. 32 Iss 3, pp. 483-494. http://dx.doi.org/10.1108/LHT-12-2013-0158


[CODE4LIB] searching planet code4lib

2015-09-15 Thread Harper, Cynthia
BTW - I deduce that the search box on the code4lib website doesn't search 
planetcode4lib - that would be nice.

Cindy


Re: [CODE4LIB] Processing Circ data

2015-08-06 Thread Harper, Cynthia
I have compacted the database, and I'm using the GROUP BY SQL query. I think I 
actually am hitting the 2GB limit, because of all the data I have for each row. 
I'm wondering whether the field I added for reserves history notes is treated 
as a fixed-length field for every record, rather than variable length, even 
though it only has values for the small number of records that have been put on 
reserve.  I suppose if I exported my data in two tables - bib and item data - 
the database would be much more efficient than the flat-file approach I've been 
using.  Time to turn the mind back on, rather than just taking the lazy 
approach every time...

Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kevin 
Ford
Sent: Wednesday, August 05, 2015 5:16 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Processing Circ data

On the surface, your difficulties suggest you may need look at a few 
optimization tactics. Apologies if these are things you've already considered 
and addressed - just offering a suggestion.

This page [1] is for Access 2003, but the items under "Improve query 
performance" should apply - I think - to newer versions also.  I'll draw 
specific attention to 1) compacting the database; 2) making sure you have an 
index set up on the bib record number field and the number-of-circs field; and 
3) making sure you are using the GROUP BY SQL syntax [2].

Now, I'm not terribly familiar with Access so I can't actually help you with 
point/click instructions, but the above are common 'gotchas' that could be a 
problem regardless of RDBMS.

Yours,
Kevin

[1] https://support.microsoft.com/en-us/kb/209126
[2] http://www.w3schools.com/sql/sql_groupby.asp
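As a concrete illustration of points 2 and 3 above, here is the same kind of indexed, grouped query in SQLite rather than Access (Access's query designer generates equivalent SQL); the table and column names are invented placeholders for a circ export.

```python
# Sketch of an indexed GROUP BY over circ data; table/column names
# (items, bib_num, circs) are invented, not from any real ILS export.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (bib_num TEXT, circs INTEGER)")
conn.executemany(
    "INSERT INTO items VALUES (?, ?)",
    [("b1001", 3), ("b1001", 0), ("b1002", 0), ("b1002", 0)],
)

# An index on the grouping field keeps this fast on real-sized tables.
conn.execute("CREATE INDEX idx_bib ON items (bib_num)")

# One row per bib record, keeping only bibs whose items never circulated.
rows = conn.execute(
    """SELECT bib_num, SUM(circs) AS total_circs
       FROM items
       GROUP BY bib_num
       HAVING SUM(circs) = 0"""
).fetchall()
print(rows)  # bibs with zero total circs
```

The HAVING clause applies the "0 circs this year" criterion after grouping, which is usually far cheaper than layering a second query on top of the grouped result.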



On 8/5/15 4:01 PM, Harper, Cynthia wrote:
> Well, I guess it could be bad data, but I don't know how to tell. I think 
> I've done more than this before.
> 
> I have a Find duplicates query that groups by bib record number.  That 
> query seemed to take about 40 minutes to process. Then I added a criterion to 
> limit to only records that had 0 circs this year. That query displays the 
> rotating cursor, then says Not Responding, then the cursor, and loops 
> through that for hours.  Maybe I can find the Access bad data, but I'd be 
> glad to find a more modern data analysis software.  My db is 136,256 kb.  But 
> adding that extra query will probably put it over the 2GB mark.  I've tried 
> extracting to a csv, and that didn't work. Maybe I'll try a Make table to a 
> separate db.
> 
> Or the OpenRefine suggestion sounds good too.
> 
> Cindy Harper
> 
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
> Of Kevin Ford
> Sent: Wednesday, August 05, 2015 4:23 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Processing Circ data
> 
> Hi Cindy,
> 
> This doesn't quite address your issue, but, unless you've hit the 2 GB Access 
> size limit [1], Access can handle a good deal more than 250,000 item records 
> (rows, yes?) you cited.
> 
> What makes you think you've hit the limit?  Slowness, something else?
> 
> All the best,
> Kevin
> 
> [1]
> https://support.office.com/en-us/article/Access-2010-specifications-1e
> 521481-7f9a-46f7-8ed9-ea9dff1fa854
> 
> On 8/5/15 3:07 PM, Harper, Cynthia wrote:
> > Hi all. What are you using to process circ data for ad-hoc queries.  I 
> > usually extract csv or tab-delimited files - one row per item record, with 
> > identifying bib record data, then total checkouts over the given time 
> > period(s).  I have been importing these into Access then grouping them by 
> > bib record. I think that I've reached the limits of scalability for Access 
> > for this project now, with 250,000 item records.  Does anyone do this in R?  
> > My other go-to- software for data processing is RapidMiner free version.  Or 
> > do you just use MySQL or other SQL database?  I was looking into doing it in 
> > R with RSQLite (just read about this and sqldf  
> > http://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ ) because ...  I'm 
> > rusty enough in R that if anyone will give me some start-off data import 
> > code, that would be great.
> > 
> > Cindy Harper
> > E-services and periodicals librarian
> > Virginia Theological Seminary
> > Bishop Payne Library
> > 3737 Seminary Road
> > Alexandria VA 22304
> > char...@vts.edu
> > 703-461-1794



Re: [CODE4LIB] Processing Circ data

2015-08-06 Thread Harper, Cynthia
I did just bring in my own laptop to see if my problem is unique to my work 
computer.  I actually have used Amazon AWS, and yes, that might be the best 
option.  I've been looking into why my MSAccess job is limited to 25% of my CPU 
time - maybe Access just can't use multiple processors.  I'm going to 
investigate SQLite and OpenRefine on my personal laptop.

Thanks all!
Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
Banerjee
Sent: Thursday, August 06, 2015 12:34 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Processing Circ data

On Wed, Aug 5, 2015 at 1:07 PM, Harper, Cynthia <char...@vts.edu> wrote:

> Hi all. What are you using to process circ data for ad-hoc queries.  I 
> usually extract csv or tab-delimited files - one row per item record, 
> with identifying bib record data, then total checkouts over the given 
> time period(s).  I have been importing these into Access then grouping 
> them by bib record. I think that I've reached the limits of 
> scalability for Access for this project now, with 250,000 item 
> records.  Does anyone do this in R?  My other go-to- software for data 
> processing is RapidMiner free version.  Or do you just use MySQL or 
> other SQL database?  I was looking into doing it in R with RSQLite 
> (just read about this and sqldf 
> http://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ ) because I'm sure my 
> IT department will be skeptical of letting me have MySQL on my desktop.
> (I've moved into a much more users-don't-do-real-computing kind of 
> environment).  I'm rusty enough in R that if anyone will give me some 
> start-off data import code, that would be great.


As has been mentioned already, it's worth investigating whether OpenRefine or 
sqllite are options for you. If not, I'd be inclined to explore solutions that 
don't rely on your local IT dept.

It's so easy to spend far more time going through approval, procurement, and 
then negotiating local IT security/policies than actually working that it pays 
to do a lot of things on the cloud. There are many services out there, but I 
like Amazon for occasional need things because you can provision anything you 
want in minutes and they're stupid cheap. If all you need is mysql for a few 
minutes now and then, just pay for Relational Database Services. If you'd 
rather have a server and run mysql off it, get an EBS backed EC2 instance (the 
reason to go this route rather than instance store is improved IO and your data 
is all retained if you shut off the server without taking a snapshot). 
Depending on your usage, bills of less than a buck a month are very doable. If 
you need something that runs 24x7, other routes will probably be more 
attractive. Another option is to try the mysql built into cheapo web hosting 
accounts like bluehost, though you might find that your disk IO gets you 
throttled. But it might be worth a shot.

If doing this work on your desktop is acceptable (i.e. other people don't need 
access to this service), you might seriously consider just doing it on a 
personal laptop that you can install anything you want on. In addition to 
mysql, you can also install VirtualBox which is a great environment for 
provisioning servers that you can export to other environments or even carry 
around on your cell phone.

With regards to some of the specific issues you bring up, 40 minutes for a 
query on a database that size is insane which indicates the tool you have is 
not up for the job. Because of the way databases store info, performance 
degrades on a logarithmic (rather than linear) basis on indexed data. In plain 
English, this means even queries on millions of records take surprisingly 
little power. Based on what you've described, changing a field from variable to 
fixed might not save you any space and could even increase it depending on what 
you have. In any case, the difference won't be worth worrying about.

Whatever solution you go with, I'd recommend learning to provision yourself 
resources when you can find some time. Work is hard enough when you can't get 
the resources you need. When you can simply assign them to yourself, the tools 
you need are always at hand so life gets much easier and more fun.

kyle


[CODE4LIB] Processing Circ data

2015-08-05 Thread Harper, Cynthia
Hi all. What are you using to process circ data for ad-hoc queries? I usually 
extract csv or tab-delimited files - one row per item record, with identifying 
bib record data, then total checkouts over the given time period(s). I have 
been importing these into Access, then grouping them by bib record. I think 
I've reached the limits of scalability for Access for this project now, with 
250,000 item records. Does anyone do this in R? My other go-to software for 
data processing is the free version of RapidMiner. Or do you just use MySQL or 
another SQL database? I was looking into doing it in R with RSQLite (just read 
about this and sqldf: http://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ ) 
because I'm sure my IT department will be skeptical of letting me have MySQL on 
my desktop. (I've moved into a much more users-don't-do-real-computing kind of 
environment.) I'm rusty enough in R that if anyone will give me some start-off 
data import code, that would be great.

Cindy Harper
E-services and periodicals librarian
Virginia Theological Seminary
Bishop Payne Library
3737 Seminary Road
Alexandria VA 22304
char...@vts.edu
703-461-1794
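For what it's worth, here is a minimal start-off sketch of the import-then-group workflow. Cindy asks for R; this uses only the Python standard library, but the SQL is the same statement RSQLite or sqldf would run. The column names (bib_num, title, circs) are invented placeholders for whatever the ILS export actually contains.

```python
# Starter sketch: load a tab-delimited circ export into SQLite, then
# group item rows up to the bib level. Column names are placeholders.
import csv
import io
import sqlite3

# In real use this would be open("circ_export.txt", newline=""); a tiny
# inline sample keeps the sketch self-contained.
sample = io.StringIO(
    "bib_num\ttitle\tcircs\n"
    "b1001\tMoby Dick\t3\n"
    "b1001\tMoby Dick\t0\n"
    "b1002\tWalden\t0\n"
)

conn = sqlite3.connect(":memory:")  # or a file path for a persistent db
conn.execute("CREATE TABLE items (bib_num TEXT, title TEXT, circs INTEGER)")
for row in csv.DictReader(sample, delimiter="\t"):
    conn.execute(
        "INSERT INTO items VALUES (?, ?, ?)",
        (row["bib_num"], row["title"], int(row["circs"])),
    )

# One row per bib record with its total checkouts, as the Access
# grouping query produced.
totals = conn.execute(
    "SELECT bib_num, SUM(circs) FROM items GROUP BY bib_num ORDER BY bib_num"
).fetchall()
print(totals)
```

Keeping bib and item data in one items table and grouping on demand avoids the flat-file duplication discussed upthread, and SQLite has no 2GB practical ceiling of the kind Access imposes.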


Re: [CODE4LIB] Processing Circ data

2015-08-05 Thread Harper, Cynthia
Well, I guess it could be bad data, but I don't know how to tell. I think I've 
done more than this before.

I have a "Find duplicates" query that groups by bib record number.  That query 
seemed to take about 40 minutes to process. Then I added a criterion to limit 
it to only records that had 0 circs this year. That query displays the rotating 
cursor, then says "Not Responding", then the cursor again, and loops through 
that for hours.  Maybe I can find the bad data in Access, but I'd be glad to 
find more modern data-analysis software.  My db is 136,256 KB, but adding that 
extra query will probably put it over the 2 GB mark.  I've tried extracting to 
a csv, and that didn't work. Maybe I'll try a Make Table query into a separate db.

Or the OpenRefine suggestion sounds good too.

Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kevin 
Ford
Sent: Wednesday, August 05, 2015 4:23 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Processing Circ data

Hi Cindy,

This doesn't quite address your issue, but, unless you've hit the 2 GB Access 
size limit [1], Access can handle a good deal more than the 250,000 item 
records (rows, yes?) you cited.

What makes you think you've hit the limit?  Slowness, something else?

All the best,
Kevin

[1]
https://support.office.com/en-us/article/Access-2010-specifications-1e521481-7f9a-46f7-8ed9-ea9dff1fa854





On 8/5/15 3:07 PM, Harper, Cynthia wrote:
 Hi all. What are you using to process circ data for ad-hoc queries.  I 
 usually extract csv or tab-delimited files - one row per item record, with 
 identifying bib record data, then total checkouts over the given time 
 period(s).  I have been importing these into Access then grouping them by bib 
 record. I think that I've reached the limits of scalability for Access for 
 this project now, with 250,000 item records.  Does anyone do this in R?  My 
 other go-to- software for data processing is RapidMiner free version.  Or do 
 you just use MySQL or other SQL database?  I was looking into doing it in R 
 with RSQLite (just read about this and sqldf  
 http://www.r-bloggers.com/make-r-speak-sql-with-sqldf/ ) because ...  I'm 
 rusty enough in R that if anyone will give me some start-off data import 
 code, that would be great.

 Cindy Harper
 E-services and periodicals librarian
 Virginia Theological Seminary
 Bishop Payne Library
 3737 Seminary Road
 Alexandria VA 22304
 char...@vts.edu
 703-461-1794



Re: [CODE4LIB] Regex Question

2015-07-08 Thread Harper, Cynthia
I like this regex add-in for Excel: 
http://www.codedawn.com/index/new-excel-add-in-regex-find-replace
Cindy Harper

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
Banerjee
Sent: Tuesday, July 07, 2015 6:22 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Regex Question

For clarity, Word does regex, not just wildcards.  It's not quite as complete 
as what you'd get in some other environments, such as OpenOffice Writer, since 
matching is lazy rather than greedy, which can be a big deal depending on what 
you're doing, and there are a couple of other catches -- notably no support for 
| -- but it's reasonably powerful. There is no regexp capability in Excel 
unless you're willing to use VBA.

kyle

On Tue, Jul 7, 2015 at 1:10 PM, Gordon, Bonnie bgor...@rockarch.org wrote:

 OpenOffice Writer (or a similar program) may be useful for this. It 
 would allow you to search by format while using a more controlled 
 regular expression than MS Word's wildcards.

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Matt Sherman
 Sent: Tuesday, July 07, 2015 12:45 PM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Regex Question

 Thanks everyone, this really helps.  I'll have to work out the 
 italicized stuff, but this gets me much closer.

 On Tue, Jul 7, 2015 at 12:43 PM, Kyle Banerjee 
 kyle.baner...@gmail.com
 wrote:

  Y'all are doing this the hard way. Word allows regex replacements as 
  well as format based criteria.
 
  For this particular use case:
 
 1. Open the find/replace dialog (CTL+H)
 2. In the Find what box, put (*) -- make sure the option for Use
 Wildcards is selected, and for the format, specify italic
 3. For the Replace box, just put \1 and specify All caps
 
  And you're done
 
  kyle
 
  On Tue, Jul 7, 2015 at 9:32 AM, Thomas Krichel kric...@openlib.org
  wrote:
 
 Eric Phetteplace writes
  
You can match a string of all caps letters like [A-Z]
  
 This works if you are limited to English. But in a multilingual
 setting, you need to watch out for other uppercase letters, such as
 крихель vs. КРИХЕЛЬ. It then depends on the Unicode implementation
 of your regex application. In Perl, for example, you would use
 [[:upper:]].
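Thomas's point is easy to check. A small sketch in Python (stdlib only; Python's re module has no [[:upper:]] class, so the Unicode-aware check here uses str.isupper() instead -- the third-party `regex` module would give you \p{Lu}):

```python
import re

name = "КРИХЕЛЬ"   # uppercase Cyrillic

# The ASCII-only class matches nothing here:
print(re.fullmatch(r"[A-Z]+", name))   # None

# A Unicode-aware check in the standard library:
print(name.isupper())                  # True
```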
  
  
   --
  
 Cheers,
  
 Thomas Krichel  http://openlib.org/home/krichel
 skype:thomaskrichel
  
 



Re: [CODE4LIB] Bibframe and FRBRization

2015-06-05 Thread Harper, Cynthia
Thanks!

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Richard 
Wallis
Sent: Thursday, June 04, 2015 9:49 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Bibframe and FRBRization

Bibframe is a vocabulary which will enable the description of Works and their 
Instances - for the purpose of this conversation a Bibframe Instance 
approximates to a FRBR Manifestation.

The kind of service you describe would be one built upon such data.  It could 
be a specific ISBN to Work id lookup tool or it could be a general query tool 
such as a SPARQL server.  So the capability is there once the data [encoded 
using the Bibframe vocabulary] is available in sufficient quantity to make such 
a service viable.

If you are looking for unique Work identifiers (URIs) for related 
manifestations - there are approximately 200 million of them available from 
WorldCat.org.  Currently the best way to get one is by using the OCLC Number 
associated with your manifestation.

As Peter points out you can get more information here:
https://www.oclc.org/developer/develop/linked-data/worldcat-entities/worldcat-work-entity.en.html

If you want to capture the exampleOfWork through code, the Linked Data 
description of a manifestation is available in several serialisation forms, 
not just HTML.  So, for example, http://www.worldcat.org/oclc/889647468 gets 
you HTML, http://www.worldcat.org/oclc/889647468.ttl gives you Turtle, 
http://www.worldcat.org/oclc/889647468.jsonld gives you JSON-LD, 
http://www.worldcat.org/oclc/889647468.rdf gives you RDF/XML, and 
http://www.worldcat.org/oclc/889647468.nt gives you N-Triples, any of which 
you can parse to extract the exampleOfWork value.
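As a sketch of that last step, here is how the exampleOfWork URI could be pulled out of a parsed JSON-LD graph. The miniature document below is hypothetical and much abridged (the real WorldCat response is far larger, and the work id 12345 is invented for illustration):

```python
import json

# Hypothetical, abridged stand-in for a WorldCat JSON-LD response.
doc = json.loads("""
{
  "@graph": [
    {
      "@id": "http://www.worldcat.org/oclc/889647468",
      "@type": "schema:Book",
      "exampleOfWork": {"@id": "http://worldcat.org/entity/work/id/12345"}
    }
  ]
}
""")

def work_uri(graph_doc, manifestation_uri):
    """Find the exampleOfWork URI for a given manifestation node."""
    for node in graph_doc.get("@graph", []):
        if node.get("@id") == manifestation_uri and "exampleOfWork" in node:
            ref = node["exampleOfWork"]
            return ref["@id"] if isinstance(ref, dict) else ref
    return None

print(work_uri(doc, "http://www.worldcat.org/oclc/889647468"))
```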

~Richard.

On 4 June 2015 at 14:35, Harper, Cynthia char...@vts.edu wrote:

 Thanks - I didn't know about it.
 Cindy

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Boheemen, Peter van
 Sent: Thursday, June 04, 2015 9:33 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Bibframe and FRBRization

 Maybe not 'the Bibframe way', but i guess the only existing service 
 for that now would be the Worldcat Work description

 see:



 https://www.oclc.org/developer/develop/linked-data/worldcat-entities/w
 orldcat-work-entity.en.html

 Peter


 
 Van: Code for Libraries CODE4LIB@LISTSERV.ND.EDU namens Harper, 
 Cynthia char...@vts.edu
 Verzonden: donderdag 4 juni 2015 15:12
 Aan: CODE4LIB@LISTSERV.ND.EDU
 Onderwerp: [CODE4LIB] Bibframe and FRBRization

 I am fairly uninformed, but my understanding is that Bibframe is 
 designed to allow distinguishing between work and manifestation as in 
 FRBR.  Will there be some resource to which we can send an ISBN for the 
 manifestation, and get back a permanent unique identifier for the work (I 
 hope I've got my FRBR concepts straight)?  And if not now, when?  I'm in 
 the thought-experiment phase of planning a dataset that would be based on 
 works, and looking for a good identifier for it.

 Cindy Harper
 E-services and periodicals librarian
 Virginia Theological Seminary
 Bishop Payne Library
 3737 Seminary Road
 Alexandria VA 22304
 char...@vts.edu
 703-461-1794




--
Richard Wallis
Founder, Data Liberate
http://dataliberate.com
Tel: +44 (0)7767 886 005

Linkedin: http://www.linkedin.com/in/richardwallis
Skype: richard.wallis1
Twitter: @rjw


[CODE4LIB] Bibframe and FRBRization

2015-06-04 Thread Harper, Cynthia
I am fairly uninformed, but my understanding is that Bibframe is designed to 
allow distinguishing between work and manifestation as in FRBR.  Will there be 
some resource to which we can send an ISBN for the manifestation, and get back 
a permanent unique identifier for the work (I hope I've got my FRBR concepts 
straight)?  And if not now, when?  I'm in the thought-experiment phase of 
planning a dataset that would be based on works, and looking for a good 
identifier for it.

Cindy Harper
E-services and periodicals librarian
Virginia Theological Seminary
Bishop Payne Library
3737 Seminary Road
Alexandria VA 22304
char...@vts.edu
703-461-1794


Re: [CODE4LIB] Ebook reader app

2015-03-25 Thread Harper, Cynthia
This is evidently what 3M and Overdrive are providing for vendors like III to 
integrate the ebook products with the ILS. The question will be, will those 
APIs be available to individual libraries, not just to ILS vendors?

Cindy Harper
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Erik 
Sandall
Sent: Wednesday, March 25, 2015 12:48 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Ebook reader app

Hi,

If I'm not mistaken, this would require ebook vendors to expand their APIs to 
include the ability to check out and download. I know of no vendor who does this.

But maybe I'm wrong on both counts...

Erik.

--
Erik Sandall, MLIS
Electronic Services Librarian & Webmaster
Mechanics' Institute
57 Post Street
San Francisco, CA 94104
415-393-0111
esand...@milibrary.org


On 3/24/2015 5:31 PM, Becky Schneider wrote:
 Here is an article that explores how such an app could be developed 
 using existing technology:

 http://www.inthelibrarywiththeleadpipe.org/2013/building-a-community-o
 f-readers-social-reading-and-an-aggregated-ebook-reading-app-for-libra
 ries/

 Becky Schneider
 Reference Librarian
 Fauquier County Public Library

 On Tue, Mar 24, 2015 at 7:39 PM, Lauren Magnuson  
 lauren.lpmagnu...@gmail.com wrote:

 I'm curious to know if anyone has explored creating a mobile app for 
 their library that would facilitate downloading /reading library 
 ebooks from multiple library ebook vendors.  I'm envisioning an app 
 that would allow the user to browse ebooks from multiple platforms 
 (e.g., ebrary, EBSCO) and enable downloading and DRM management stuff right 
 in the app.

 I can think of a million roadblocks to creating something like this 
 (publishers, vendors, Adobe, etc.)  But I can also think of a lot of 
 good reasons why this would be very useful (the process to download 
 an ebook from an academic library is, for the most part, ludicrous).

 I know there's Overdrive - and ebrary has its own app, or whatever, 
 and there are apps like Bluefire that can be used with library ebooks 
 - but something non-platform specific that could conceivably work for 
 multiple library ebook platforms (and be customized by a library to 
 allow the reader to browse collections) is what I have in mind.  I 
 also really dig this Reader's First (http://readersfirst.org/) 
 initiative, which it looks like is wrangling with a lot of the policy 
 /vendor side of things.

 Feel free to contact me off list with any information / ideas / advice.
 This feels like a kind of enormous problem, and a lot of libraries 
 could benefit from a group working toward a technical solution - but 
 perhaps such a group / initiative already exists?

 Thanks in advance,

 Lauren Magnuson
 Systems & Emerging Technologies Librarian, CSU Northridge
 Development Coordinator, PALNI



Re: [CODE4LIB] linked data question

2015-02-26 Thread Harper, Cynthia
I apologize to both lists for this observation. I don't mean to offend anyone, 
and now it's clear to me that this will potentially do so.  I don't plan on 
commenting further.  I do hold both new technologists and traditional 
librarians in respect - I just may generalize too much in trying to describe to 
myself where the viewpoints differ.

Cindy Harper

-Original Message-
From: Harper, Cynthia 
Sent: Thursday, February 26, 2015 10:22 AM
To: CODE4LIB@LISTSERV.ND.EDU
Cc: 'AUTOCAT'
Subject: RE: [CODE4LIB] linked data question

So the issue being discussed on AUTOCAT was the availability/fault tolerance of 
the database, given that it's spread over numerous remote systems, and I 
suppose local caching and mirroring are the answers there.  

The other issue was skepticism about the feasibility of indexing all these 
remote sources, which led me to thinking about remote indexes, but I see the 
answer is that that's why we won't be using single-site local systems so much, 
but instead using Google-like web-scale indexes.  That's putting pressure on 
the old vision of the library catalog as our database.

Is that a fair understanding?

Cindy Harper
char...@vts.edu 

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Thursday, February 26, 2015 9:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] linked data question

On Feb 25, 2015, at 3:12 PM, Sarah Weissman seweiss...@gmail.com wrote:

 I am kind of new to this linked data thing, but it seems like the real 
 power of it is not full-text search, but linking through the use of 
 shared vocabularies. So if you have data about Jane Austen in your 
 database and you are using the same URI as other databases to 
 represent Jane Austen in your data (say 
 http://dbpedia.org/resource/Jane_Austen), then you (or rather, your
 software) can do an exact search on that URI in remote resources vs. a 
 fuzzy text search. In other words, linked data is really
^
 supposed to be linked by machines and discoverable through URIs. If 
 you
 
 visit the URL: http://dbpedia.org/page/Jane_Austen you can see a 
 human-interpretable representation of the data a SPARQL endpoint would 
 return for a query for triples {http://dbpedia.org/page/Jane_Austen ?p ?o}.
 This is essentially asking the database for all 
 subject-predicate-object facts it contains where Jane Austen is the subject.


Again, seweissman++  The implementation of linked data is VERY much like the 
implementation of a relational database over HTTP, and in such a scenario, the 
URIs are the database keys. —ELM
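ELM's "URIs are the database keys" point can be made concrete with a toy, in-memory stand-in for a triple store (the names below are illustrative, not a real SPARQL client): the lookup is an exact match on the URI, not a fuzzy text search.

```python
# A toy triple store: each fact is a (subject, predicate, object) tuple.
JANE = "http://dbpedia.org/resource/Jane_Austen"

triples = [
    (JANE, "rdf:type", "dbo:Writer"),
    (JANE, "dbo:birthPlace", "http://dbpedia.org/resource/Steventon"),
    ("http://dbpedia.org/resource/Jane_Eyre", "rdf:type", "dbo:Book"),
]

# The pattern {<Jane_Austen> ?p ?o}: every fact whose subject is that URI.
facts = [(p, o) for s, p, o in triples if s == JANE]
print(facts)
```

Note that "Jane Eyre" is never pulled in by accident, as it could be with a fuzzy text match on "Jane": the exact URI is what joins data across databases.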


Re: [CODE4LIB] linked data question

2015-02-26 Thread Harper, Cynthia
So the issue being discussed on AUTOCAT was the availability/fault tolerance of 
the database, given that it's spread over numerous remote systems, and I 
suppose local caching and mirroring are the answers there.  

The other issue was skepticism about the feasibility of indexing all these 
remote sources, which led me to thinking about remote indexes, but I see the 
answer is that that's why we won't be using single-site local systems so much, 
but instead using Google-like web-scale indexes.  That's putting pressure on 
the old vision of the library catalog as our database.

Is that a fair understanding?

Cindy Harper
char...@vts.edu 

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Eric 
Lease Morgan
Sent: Thursday, February 26, 2015 9:44 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] linked data question

On Feb 25, 2015, at 3:12 PM, Sarah Weissman seweiss...@gmail.com wrote:

 I am kind of new to this linked data thing, but it seems like the real 
 power of it is not full-text search, but linking through the use of 
 shared vocabularies. So if you have data about Jane Austen in your 
 database and you are using the same URI as other databases to 
 represent Jane Austen in your data (say 
 http://dbpedia.org/resource/Jane_Austen), then you (or rather, your 
 software) can do an exact search on that URI in remote resources vs. a 
 fuzzy text search. In other words, linked data is really
^
 supposed to be linked by machines and discoverable through URIs. If 
 you
 
 visit the URL: http://dbpedia.org/page/Jane_Austen you can see a 
 human-interpretable representation of the data a SPARQL endpoint would 
 return for a query for triples {http://dbpedia.org/page/Jane_Austen ?p ?o}.
 This is essentially asking the database for all 
 subject-predicate-object facts it contains where Jane Austen is the subject.


Again, seweissman++  The implementation of linked data is VERY much like the 
implementation of a relational database over HTTP, and in such a scenario, the 
URIs are the database keys. —ELM


Re: [CODE4LIB] linked data question

2015-02-25 Thread Harper, Cynthia
Well, that's my question.  I have the micro view of linked data, I think - it's 
a distribution/self-describing format. But I don't see the big picture.

In the non-techie library world, linked data is being talked about (perhaps 
only in listserv traffic) as if the data (bibliographic data, for instance) 
will reside on remote sites (as a SPARQL endpoint??? We don't know the 
technical implications of that), and be displayed by your local catalog/the 
centralized inter-national catalog by calling data from that remote site. But 
the original question was how the data on those remote sites would be access 
points - how can I start my search by searching for that remote content?  I 
assume there has to be a database implementation that visits that data and 
pre-indexes it for it to be searchable, and therefore the index has to be local 
(or global a la Google or OCLC or its bibliographic-linked-data equivalent). 

All of the above parenthesized or bracketed concepts are nebulous to me.

Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Sarah 
Weissman
Sent: Tuesday, February 24, 2015 11:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] linked data question

 I think Code4libbers will know more about my question about 
 distributed INDEXES?  This is my rudimentary knowledge of linked data 
 - that the indexing process will have to transit the links, and build 
 a local index to the data, even if in displaying the individual 
 records, it goes again out to the source.  But are there examples of 
 distributed systems that have distributed INDEXES?  Or Am I wrong in 
 envisioning an index as a separate entity from the data in today's technology?


I'm a little confused by what you mean by distributed index in a linked data 
context. I assume an index would have to be database implementation specific, 
while data is typically exposed for external consumption via 
implementation-agnostic protocols/formats, like a SPARQL endpoint or a REST 
API. How do you locally index something remote under these constraints?

-Sarah



 Cindy Harper

 -Original Message-
 From: Harper, Cynthia
 Sent: Tuesday, February 24, 2015 1:20 PM
 To: auto...@listserv.syr.edu; 'Williams, Ann'
 Subject: RE: linked data question

 What I haven't read, but what I have wondered about, is whether so 
 far, linked DATA is distributed, but the INDEXES are local?  Is there 
 any example of a system with distributed INDEXES?

 Cindy Harper
 char...@vts.edu

 -Original Message-
 From: AUTOCAT [mailto:auto...@listserv.syr.edu] On Behalf Of Williams, 
 Ann
 Sent: Tuesday, February 24, 2015 10:26 AM
 To: auto...@listserv.syr.edu
 Subject: [ACAT] linked data question

 I was just wondering how linked data will affect OPAC searching and 
 discovery vs. a record with text approach. For example, we have 
 various 856 links to publisher, summary and biographical information 
 in our OPAC as well as ISBNs linking to ContentCafe. But none of that 
 content is discoverable in the OPAC and it requires a further click on 
 the part of patrons (many of whom won't click).

 Ann Williams
 USJ
 --
 **
 *

 AUTOCAT quoting guide: http://www.cwu.edu/~dcc/Autocat/copyright.html
 E-mail AUTOCAT listowners: autocat-requ...@listserv.syr.edu
 Search AUTOCAT archives:  http://listserv.syr.edu/archives/autocat.html
   By posting messages to AUTOCAT, the author does not cede copyright

 **
 *



Re: [CODE4LIB] linked data question

2015-02-24 Thread Harper, Cynthia
Ann - I thought I'd refer part of your question to Code4lib.  

As far as having to click to get the linked data: systems that use linked data 
will be built to transit the link without the user being aware - it's the 
system that will follow that link and find the distributed data, then display 
it as it is programmed to do so.

I think Code4libbers will know more about my question about distributed 
INDEXES?  This is my rudimentary knowledge of linked data - that the indexing 
process will have to transit the links, and build a local index to the data, 
even if in displaying the individual records, it goes again out to the 
source.  But are there examples of distributed systems that have distributed 
INDEXES?  Or am I wrong in envisioning an index as a separate entity from the 
data in today's technology?

Cindy Harper

-Original Message-
From: Harper, Cynthia 
Sent: Tuesday, February 24, 2015 1:20 PM
To: auto...@listserv.syr.edu; 'Williams, Ann'
Subject: RE: linked data question

What I haven't read, but what I have wondered about, is whether so far, linked 
DATA is distributed, but the INDEXES are local?  Is there any example of a 
system with distributed INDEXES?

Cindy Harper
char...@vts.edu

-Original Message-
From: AUTOCAT [mailto:auto...@listserv.syr.edu] On Behalf Of Williams, Ann
Sent: Tuesday, February 24, 2015 10:26 AM
To: auto...@listserv.syr.edu
Subject: [ACAT] linked data question

I was just wondering how linked data will affect OPAC searching and discovery 
vs. a record with text approach. For example, we have various 856 links to 
publisher, summary and biographical information in our OPAC as well as ISBNs 
linking to ContentCafe. But none of that content is discoverable in the OPAC 
and it requires a further click on the part of patrons (many of whom won't 
click).

Ann Williams
USJ
--
***

AUTOCAT quoting guide: http://www.cwu.edu/~dcc/Autocat/copyright.html
E-mail AUTOCAT listowners: autocat-requ...@listserv.syr.edu
Search AUTOCAT archives:  http://listserv.syr.edu/archives/autocat.html
  By posting messages to AUTOCAT, the author does not cede copyright

***


Re: [CODE4LIB] state of the art in virtual shelf browse?

2015-01-27 Thread Harper, Cynthia
What testimony to what a difference presentation can make!  So much better than 
basically the same functionality, but in a text list, as shown in our old III 
Webpac.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Cole 
Hudson
Sent: Tuesday, January 27, 2015 9:57 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] state of the art in virtual shelf browse?

Hi Jenn,

Just to add one example more to the mix, we've built a shelf browser based on 
Harvard's Stackview/Stacklife project--adding to it a z39.50 connector and 
organizing results by call number. This search works across all of holdings, 
regardless of the books' locations. (Click the link, then under the Books and 
Media box, click See on Shelf to look at our shelf browser.)

http://library.wayne.edu/quicksearch/#q=the%20hobbit

Also, our code is on Github: https://github.com/WSULib/SVCatConnector

Cole


Re: [CODE4LIB] Help with Catmandu MARC import to CouchDB

2015-01-25 Thread Harper, Cynthia
Thanks Francis!

Here it is. It's probably a MARC-8 file, given that it's output from III. So I 
should probably run it through a fix?  My first reaction would be to pass it 
through MarcEdit.

I got the convert-to-JSON step to work after I added the space around the <.  
But that didn't seem to fix the CouchDB issue.

Cindy
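One quick check before reaching for a fix: MARC 21 records declare their character coding scheme in leader position 09 ('a' means UTF-8; blank means MARC-8). A small sketch (the path you pass is whatever your III export produced):

```python
def marc_encoding(path):
    """Peek at leader position 09: 'a' means UTF-8, blank means MARC-8."""
    with open(path, "rb") as f:
        leader = f.read(24)          # the MARC leader is 24 bytes
    return "UTF-8" if leader[9:10] == b"a" else "MARC-8 (or other)"
```

If it is MARC-8, MarcEdit can convert it to UTF-8, and yaz-marcdump also supports character-set conversion (its -f/-t options) before Catmandu ever sees the file.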

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Francis 
Kayiwa
Sent: Sunday, January 25, 2015 4:50 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Help with Catmandu MARC import to CouchDB

On 1/25/15 4:27 PM, Harper, Cynthia wrote:
 Hi - I'm trying to use catmandu to build a copy of my III authorities 
 database in CouchDB, queryable by REST.

 I'm working in Windows 7.

 I've successfully been able to import MARC bib records (from an ebook set) 
 into my database, but I'm failing when trying with the authority records.  
 I'm assuming that might be because of validation and what fields exist in the 
 authority records.
 Here's my error message:
 C:\Perl>catmandu convert MARC to JSON < 
 c:\Users\charper\Documents\Authorities\small\msplit.mrc
 No Perl script found in input


Care to share the mrc file to see if I'd get the same results?

Also I would think a space between the < and c:\ would be needed.

Cheers,
./fxk



 Here's a sample MARC authority converted to MARcEDIT .mrk:  (looks 
 like I could have started with a simpler record)

 =LDR  02738cz   2200517n  45 0
 =001  oca00314234\
 =003  OCoLC
 =005  20141107020415.0
 =008  790918n|\azannaabn\\|a\aaa\\
 =010  \\$an  79081704 $zn  90664944
 =040  \\$aDLC$beng$erda$cDLC$dDLC$dDLC$dInU$dUPB$dDLC
 =046  \\$f1285{tilde}$g1349{tilde}$2edtf
 =100  0\$aWilliam,$cof Ockham,$dapproximately 1285-approximately 1349
 =372  \\$aphilosopher
 =375  \\$amale
 =377  \\$alat
 =400  0\$aGuglielmo,$cdi Ockham,$dapproximately 1285-approximately 
 1349
 =400  0\$aGuglielmo,$cd'Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuilelmus,$cde Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuilhelmus,$cde Ockam,$dapproximately 1285-approximately 
 1349
 =400  0\$aGuillaume,$cd'Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuillelmus,$cde Ockham,$dapproximately 1285-approximately 
 1349
 =400  0\$aGulielmus,$cOcchamus,$dapproximately 1285-approximately 1349
 =400  0\$aOccam,$dapproximately 1285-approximately 1349
 =400  1\$aOccam, Guillaume d',$dapproximately 1285-approximately 1349
 =400  1\$aOccam, William,$dapproximately 1285-approximately 1349
 =400  1\$aOccamus, Guilielmus,$dapproximately 1285-approximately 1349
 =400  1\$aOcchamus, Gulielmus,$dapproximately 1285-approximately 1349
 =400  1\$aOckam, Guilhelmus de,$dapproximately 1285-approximately 1349
 =400  1\$aOckham, William,$dapproximately 1285-approximately 1349
 =400  1\$aOckham, William,$dd. ca. 1349$wnnaa
 =400  0\$aOkkam, Uil{softsign}{llig}i{rlig}am,$dapproximately 
 1285-approximately 1349
 =400  1\$aOkk{mllhring}am, William,$dapproximately 1285-approximately 
 1349
 =400  0\$aWilhelm,$cvon Ockham,$dapproximately 1285-approximately 1349
 =400  0\$aWilliam,$cof Occam,$dapproximately 1285-approximately 1349
 =400  0\$aWilliam,$cof Ockham,$dca. 1285-ca. 1349$wnnea
 =400  0\$aWilliam Okk{mllhring}am,$dapproximately 1285-approximately 
 1349
 =670  \\$aPak, C.G. William Okk{mllhring}am {breve}ui saengae wa 
 sasang, 1983:$bt.p. (William Okk{mllhring}am)
 =670  \\$aAicher, O. Wilhelm von Ockham, c1986.
 =670  \\$aInU/Wing STC files$b(usage: Gulielmi Occhami ...)
 =670  \\$aHis Expositio in libros Physicorum Aristotelis, 1985:$bt.p. 
 (Guillelmi de Ockham)
 =670  \\$aThe JFK assassination, 1999:$bt.p. (Occam) p. 4 of cover 
 (William of Ockham; medieval philosopher)
 =670  \\$aFilosofi{llig}i{rlig}a Uil{softsign}{llig}i{rlig}ama Okkama, 2001.
 =670  \\$aEpitome et collectorium ex Occamo circa quatuor Sententiarum 
 libros, 1965.
 =670  \\$aTabule ad diversas huius operis Magistri Guilhelmi de Ockam super 
 quattuor libros Sententiarum annotationes ..., 9-10 Nov. 1495.
 =907  \\$a.a12671617$b11-07-14$c11-07-14$d-$e-$f-

 Any help?

 Thanks in advance.

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu


--
Mediocrity finds safety in standardization.
-- Frederick Crane


msplit.mrc
Description: msplit.mrc


Re: [CODE4LIB] Help with Catmandu MARC import to CouchDB

2015-01-25 Thread Harper, Cynthia
Curious. It goes on to a different error (the convert to JSON actually works) 
when my working directory is c:\Users\charper, but not if my working directory 
is c:\Perl.  But my import to CouchDB of the ebook bibs worked in the c:\Perl 
directory.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Francis 
Kayiwa
Sent: Sunday, January 25, 2015 4:50 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Help with Catmandu MARC import to CouchDB

On 1/25/15 4:27 PM, Harper, Cynthia wrote:
 Hi - I'm trying to use catmandu to build a copy of my III authorities 
 database in CouchDB, queryable by REST.

 I'm working in Windows 7.

 I've successfully been able to import MARC bib records (from an ebook set) 
 into my database, but I'm failing when trying with the authority records.  
 I'm assuming that might be because of validation and what fields exist in the 
 authority records.
 Here's my error message:
 C:\Perl>catmandu convert MARC to JSON < 
 c:\Users\charper\Documents\Authorities\small\msplit.mrc
 No Perl script found in input


Care to share the mrc file to see if I'd get the same results?

Also I would think a space between the < and c:\ would be needed.

Cheers,
./fxk



 Here's a sample MARC authority converted to MARcEDIT .mrk:  (looks 
 like I could have started with a simpler record)

 =LDR  02738cz   2200517n  45 0
 =001  oca00314234\
 =003  OCoLC
 =005  20141107020415.0
 =008  790918n|\azannaabn\\|a\aaa\\
 =010  \\$an  79081704 $zn  90664944
 =040  \\$aDLC$beng$erda$cDLC$dDLC$dDLC$dInU$dUPB$dDLC
 =046  \\$f1285{tilde}$g1349{tilde}$2edtf
 =100  0\$aWilliam,$cof Ockham,$dapproximately 1285-approximately 1349
 =372  \\$aphilosopher
 =375  \\$amale
 =377  \\$alat
 =400  0\$aGuglielmo,$cdi Ockham,$dapproximately 1285-approximately 
 1349
 =400  0\$aGuglielmo,$cd'Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuilelmus,$cde Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuilhelmus,$cde Ockam,$dapproximately 1285-approximately 
 1349
 =400  0\$aGuillaume,$cd'Occam,$dapproximately 1285-approximately 1349
 =400  0\$aGuillelmus,$cde Ockham,$dapproximately 1285-approximately 
 1349
 =400  0\$aGulielmus,$cOcchamus,$dapproximately 1285-approximately 1349
 =400  0\$aOccam,$dapproximately 1285-approximately 1349
 =400  1\$aOccam, Guillaume d',$dapproximately 1285-approximately 1349
 =400  1\$aOccam, William,$dapproximately 1285-approximately 1349
 =400  1\$aOccamus, Guilielmus,$dapproximately 1285-approximately 1349
 =400  1\$aOcchamus, Gulielmus,$dapproximately 1285-approximately 1349
 =400  1\$aOckam, Guilhelmus de,$dapproximately 1285-approximately 1349
 =400  1\$aOckham, William,$dapproximately 1285-approximately 1349
 =400  1\$aOckham, William,$dd. ca. 1349$wnnaa
 =400  0\$aOkkam, Uil{softsign}{llig}i{rlig}am,$dapproximately 
 1285-approximately 1349
 =400  1\$aOkk{mllhring}am, William,$dapproximately 1285-approximately 
 1349
 =400  0\$aWilhelm,$cvon Ockham,$dapproximately 1285-approximately 1349
 =400  0\$aWilliam,$cof Occam,$dapproximately 1285-approximately 1349
 =400  0\$aWilliam,$cof Ockham,$dca. 1285-ca. 1349$wnnea
 =400  0\$aWilliam Okk{mllhring}am,$dapproximately 1285-approximately 
 1349
 =670  \\$aPak, C.G. William Okk{mllhring}am {breve}ui saengae wa 
 sasang, 1983:$bt.p. (William Okk{mllhring}am)
 =670  \\$aAicher, O. Wilhelm von Ockham, c1986.
 =670  \\$aInU/Wing STC files$b(usage: Gulielmi Occhami ...)
 =670  \\$aHis Expositio in libros Physicorum Aristotelis, 1985:$bt.p. 
 (Guillelmi de Ockham)
 =670  \\$aThe JFK assassination, 1999:$bt.p. (Occam) p. 4 of cover 
 (William of Ockham; medieval philosopher)
 =670  \\$aFilosofi{llig}i{rlig}a Uil{softsign}{llig}i{rlig}ama Okkama, 2001.
 =670  \\$aEpitome et collectorium ex Occamo circa quatuor Sententiarum 
 libros, 1965.
 =670  \\$aTabule ad diversas huius operis Magistri Guilhelmi de Ockam super 
 quattuor libros Sententiarum annotationes ..., 9-10 Nov. 1495.
 =907  \\$a.a12671617$b11-07-14$c11-07-14$d-$e-$f-

 Any help?

 Thanks in advance.

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu


--
Mediocrity finds safety in standardization.
-- Frederick Crane


Re: [CODE4LIB] Identifying misshelved items

2015-01-15 Thread Harper, Cynthia
So we have to humanly check the skinny books with labels on the covers?
Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Becky 
Yoose
Sent: Thursday, January 15, 2015 3:14 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Identifying misshelved items

Hi Cab,

Thoughts on how best to tackle this? And no, shelf-reading while 
scanning
is not an acceptable solution :-)

Awww, but you can shelf read with your phone! http://shelvar.com/ They claim to 
have an inventory part in development, but I am unaware of the ETA of the 
feature. I do know one of the main folks behind the app, though, if you want 
more info.

Thanks,
Becky


--
Becky Yoose
Discovery and Integrated Systems Librarian Grinnell College Libraries

On Thu, Jan 15, 2015 at 1:32 PM, Cab Vinton bibli...@gmail.com wrote:

 We're doing inventory here and would love to combine this with finding 
 items out of call number order. (The inventory process simply updates 
 the datelastseen field.)

 Koha's inventory tool generates an XLS file in the following format 
 (barcodes, too, actually):

   Title              | Author                   | Call number
   The last jihad :   | Rosenberg, Joel,         | FIC ROSEN
   Home repair /      | Rosenbarg, Liz.          | FIC ROSEN
   Abuse of power /   | Rosen, Fred.             | FIC ROSEN
   California angel / | Rosenberg, Nancy Taylor. | FIC ROSEN

  What we'd ideally like is a programmatic method of:

 1./ identifying items like Home Repair and Abuse of Power, and

 2./ specifying where such misshelved titles are currently located.

 For fiction, we're mostly concerned with authors out of order (i.e., 
 title order *within* the same author can be ignored). For non-fiction, 
 Dewey/ call number order is, of course, the desired result.

 Thoughts on how best to tackle this? And no, shelf-reading while 
 scanning is not an acceptable solution :-)

 My VBA skills are seriously rusty at this point, and there are some 
 complicating factors (e.g., how to handle two books in a row which are 
 misshelved -- the second book's location should be compared to the 
 last correctly shelved book; see Rosen/Rosenberg above).
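For what it's worth, the comparison rule described above (track the last correctly shelved book and flag anything that sorts before it) can be sketched in a few lines. A minimal Python sketch, assuming rows of (title, author, call number) in shelf-scan order; all names are hypothetical, and a real version would need Koha's actual column layout and proper call-number normalization:

```python
# Flag items whose (call number, author) sort key falls before the last
# correctly shelved item. Out-of-order items do NOT advance the anchor,
# so two misshelved books in a row are both flagged.

def find_misshelved(rows):
    misshelved = []
    last_good = None
    for row in rows:
        key = (row[2].upper(), row[1].upper())  # call number, then author
        if last_good is not None and key < last_good:
            misshelved.append(row)   # out of order: keep the old anchor
        else:
            last_good = key          # correctly shelved: new anchor
    return misshelved
```

Run against the four Rosen/Rosenberg rows above, this flags Home repair and Abuse of power and leaves the other two alone.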

 Has this wheel already been invented?

 Grateful for any & all suggestions!

 Best,

 Cab Vinton, Director
 Plaistow Public Library
 Plaistow, NH



Re: [CODE4LIB] what good books did you read in 2014?

2014-12-09 Thread Harper, Cynthia
Just found _Guilt about the past_ is in EBSCO Academic Complete. 

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Schwartz, Raymond
Sent: Tuesday, December 09, 2014 3:33 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] what good books did you read in 2014?

Not all in 2014, but some very good books.

Books

Guilt About the Past, Bernhard Schlink
Begin Here: The Forgotten Conditions of Teaching and Learning, Jacques Barzun
War Is A Force That Gives Us Meaning, Chris Hedges
James Tiptree, Jr.: The Double Life of Alice B. Sheldon, Julie Phillips
Detroit City Is the Place to Be: The Afterlife of an American Metropolis, Mark Binelli

/Ray

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Andromeda Yelton
Sent: Tuesday, December 09, 2014 9:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] what good books did you read in 2014?

Hey, code4lib! I bet you consume fascinating media. What good books did you 
read in 2014 that you think your colleagues would like, too?  (And hey, we're 
all digital, so feel free to include movies and video games and so
forth.)

Mine:
http://www.obeythetestinggoat.com/ (O'Reilly book, plus read free online) - a 
book on testing from a Django-centric, front end perspective. *Finally* I get 
how testing works. This book rewrote my brain.

_The Warmth of Other Suns_ - finally got around to reading this magnum opus 
history of the Great Migration, am halfway through, it's amazing. If you're 
looking for some historical context on how we got to Ferguson, Isabel Wilkerson 
has you covered.

_Her_ - Imma let you finish, Citizenfour and Big Hero 6 and LEGO movie and 
Guardians of the Galaxy - you were all good - but I walked out of the theater 
and literally couldn't speak after this one. Plus, funniest throwaway scene 
ever. Almost fell out of my chair.

_Tim's Vermeer_ - wait, no, watch that one too. Weird tinkering genius who 
can't paint obsesses over recreating a Vermeer with startling, physics-driven 
results. Also, Penn Jillette.

--
Andromeda Yelton
Board of Directors, Library & Information Technology Association:
http://www.lita.org
Advisor, Ada Initiative: http://adainitiative.org http://andromedayelton.com 
@ThatAndromeda http://twitter.com/ThatAndromeda


Re: [CODE4LIB] Getty AAT to MARC Authorities

2014-11-24 Thread Harper, Cynthia
Good point - thanks.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Mark A. 
Matienzo
Sent: Monday, November 24, 2014 12:03 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Getty AAT to MARC Authorities

One caveat - by using the Linked Data versions of the Getty vocabularies, you 
will be bound to the ODC-BY license [0], and as such you will need to determine 
the best way for public attribution of your reuse of the vocabulary data.

[0] http://opendatacommons.org/licenses/by/summary/

Mark

--
Mark A. Matienzo m...@matienzo.org
Director of Technology, Digital Public Library of America

On Mon, Nov 24, 2014 at 11:49 AM, Harper, Cynthia char...@vts.edu wrote:


 http://www.getty.edu/vow/AATFullDisplay?find=alb&logic=AND&note=&subjectid=300210450

 Is an example.

 I'd need to check with our Tech Services head to see what fields we want
 to retain and map to MARC.   We'd restrict it to Topical subject
 authorities - maybe that's all there is in AAT, I don't know.  I 
 expect we just want the preferred term and the other terms.  Maybe related terms too.

 Cindy Harper

 -Original Message-
 From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf 
 Of Terry Reese
 Sent: Monday, November 24, 2014 11:27 AM
 To: CODE4LIB@LISTSERV.ND.EDU
 Subject: Re: [CODE4LIB] Getty AAT to MARC Authorities

 Cynthia,

 MarcEdit might.  I have been working on a proof of concept JSON 
 converter that you can teach (i.e., template-based).  It's not finished 
 or ready for folks to work with, but if I had some sample records and 
 you had some interest in working through the process with me, I'd be 
 interested in seeing if the process works like I think it might (and 
 be straightforward enough for non-programmers to use)

 --tr

 On Fri, Nov 21, 2014 at 5:04 PM, Eric Phetteplace phett...@gmail.com
 wrote:

  Ooo, this is a good idea! Please share if you end up with something 
  that works.
 
  Best,
  Eric
 
  On Fri, Nov 21, 2014 at 1:11 PM, Harper, Cynthia char...@vts.edu
 wrote:
 
   Does anyone have a method of taking JSON, RDF, etc., from the 
   Getty Art and Architecture Thesaurus, crosswalking it to MARC and 
   importing it into your old-fashioned ILS using the OCLC export port?
   - Wait - can MARCedit
  do
   this?
  
   Any tips are welcome.
  
   Cindy Harper
   Electronic Services and Serials Librarian Virginia Theological 
   Seminary
   3737 Seminary Road
   Alexandria VA 22304
   703-461-1794
   char...@vts.edu
  
 



[CODE4LIB] Getty AAT to MARC Authorities

2014-11-21 Thread Harper, Cynthia
Does anyone have a method of taking JSON, RDF, etc., from the Getty Art and 
Architecture Thesaurus, crosswalking it to MARC and importing it into your 
old-fashioned ILS using the OCLC export port? - Wait - can MarcEdit do this?

Any tips are welcome.
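In case it helps anyone sketching this out: assuming the AAT entry has already been parsed (preferred term, variant terms, subject ID), emitting MarcEdit-style mnemonic lines is the easy half. A minimal Python sketch; the field and indicator choices below are illustrative assumptions, not a vetted AAT-to-MARC mapping:

```python
# Emit mnemonic-MARC authority lines from a parsed AAT entry. Field choices
# (024 7_ with $2aat for the identifier, 150 for the topical heading,
# 450 for see-from variants) are assumptions for illustration only.

def aat_to_marc_lines(pref_term, alt_terms, aat_id):
    lines = [f"=024  7\\$a{aat_id}$2aat"]                 # source identifier
    lines.append(f"=150  \\\\$a{pref_term}")              # topical heading
    lines += [f"=450  \\\\$a{alt}" for alt in alt_terms]  # see-from variants
    return lines
```

The hard half (fetching and parsing the Getty JSON/RDF, and getting the records loaded) is exactly what Terry's converter would cover.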

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


[CODE4LIB] Wednesday afternoon reverie

2014-10-22 Thread Harper, Cynthia
So I'm deleting all the Bisac subject headings (650_7|2bisacsh) from our ebook 
records - they were deemed not to be useful, especially as it would entail a 
for-fee indexing change to make them clickable.  But I'm thinking if we someday 
have a discovery system, they'll be useful as a means for broader-to-narrower 
term browsing that won't require translation to English, as would call number 
ranges.

As I watch the system slowly chunk through them, I think about how library 
collections and catalogs facilitate jumping to the most specific subjects, but 
browsing is something of an afterthought.

What if we could set a ranking score for the importance of an item in 
browsing, based on circulation data - authors ranked by the relative 
circulation of all their works, same for series, latest edition of a 
multi-edition work given higher ranking, etc.?  Then have a means to set the 
threshold importance value you want to look at, and browse through these 
general Bisac terms, or the classification?  Or have a facet for importance 
threshold.  I see Bisac sometimes has a broadness/narrowness facet (overview) 
- wonder how consistently that's applied, enough to be useful?

Guess those rankings would be very expensive in compute time.
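The core of the ranking idea is cheap, though. A rough Python sketch (all names hypothetical), assuming per-item circulation counts are available from the ILS:

```python
# Score each author by total circulation across all their works, then keep
# only authors above a browse-importance threshold. The same shape would
# work for series or editions.
from collections import defaultdict

def authors_above_threshold(circ_data, threshold):
    """circ_data: iterable of (author, circ_count) pairs; returns sorted authors."""
    totals = defaultdict(int)
    for author, circs in circ_data:
        totals[author] += circs
    return sorted(a for a, total in totals.items() if total >= threshold)
```

The expensive part would be keeping the scores fresh and wiring them into the discovery index as a facet, not computing them.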

Well, back to the deletions.

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


[CODE4LIB] IFTTT and barcodes

2014-09-10 Thread Harper, Cynthia
Now that someone has mentioned IFTTT, I'm reading up on it and wonder if it 
could make this task possible:

One of my tasks is copy cataloging. I'm only authorized to do LC copy, which 
involves opening the record (already downloaded in the acq process), and 
checking to see that 490 doesn't exist (I can't handle series), and looking for 
DLC in the 040 |a and |c.
It's discouraging when I take 10 books back to my desk from the cataloging 
shelf, and all 10 are not eligible for copy cataloging.

S...  could I take my phone to the cataloging shelf, use IFTTT to scan my 
ISBN, search in the III WebPAC, look at the MARC record, and tell me whether 
it's LC copy?

Empower the frontline workers! :)

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


Re: [CODE4LIB] IFTTT and barcodes

2014-09-10 Thread Harper, Cynthia
This is not in the stacks - it's in the shelves in the cataloging office. 
WiFi's fine there.

Now what I didn't mention to you all is that I could just suggest that we 
shelve our cataloging queue with LC copy separated from non-DLC, but then how 
would I learn a new trick?  (And I'd have to ask the acq librarian to do the 
sorting.) 
- Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ian 
Walls
Sent: Wednesday, September 10, 2014 3:49 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

I don't think IFTTT is the right tool, but the basic idea is sound.

With a spot of custom scripting on some server somewhere, one could take in an 
ISBN, lookup via the III WebPac, assess eligibility conditions, then return yes 
or no.  Barcode Scanner on Android has the ability to do custom search URLs, so 
if your yes/no script can accept URL params, then you should be all set.  

Barring a script, just a lookup of the MARC record may be possible, and if it 
was styled in a mobile-friendly manner, perhaps you could quickly glean whether 
it's okay or not for copy cataloging.
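The eligibility test itself is tiny. A hypothetical sketch over a simplified record structure; a real version would parse the MARC fetched from the WebPAC (e.g. with pymarc), and the rule is the one from Cindy's email: no 490, and DLC in both 040 $a and $c:

```python
# fields: {tag: [ {subfield_code: [values]} ]} -- a simplified stand-in
# for a parsed MARC record, used here so the sketch stays self-contained.

def lc_copy_eligible(fields):
    if fields.get('490'):
        return False                      # series statement present: no LC copy
    for f040 in fields.get('040', []):
        # DLC must appear in both $a (cataloging source) and $c (transcriber)
        if 'DLC' in f040.get('a', []) and 'DLC' in f040.get('c', []):
            return True
    return False
```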

Side question: is there connectivity in the stacks for doing this kind of 
lookup?  I know in my library, that's not always the case.


-Ian

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Wednesday, September 10, 2014 3:31 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

Webhooks via the WordPress channel?

Riley Childs
Senior
Charlotte United Christian Academy
Library Services Administrator
IT Services
(704) 497-2086
rileychilds.net
@rowdychildren

From: Tara Robertson mailto:trobert...@langara.bc.ca
Sent: ‎9/‎10/‎2014 3:03 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

Hi,

I don't think this is possible using IFTTT right now as existing channels don't 
exist to create a recipe. I'm trying to think of what those channels would be 
and can't quite...I don't think IFTTT is the best tool for this task.

What ILS are you using? Could you hook a barcode scanner up to a tablet and 
scan, then check the MARC...nah, that's seeming almost as time consuming as 
taking it to your desk (depending on how far your desk is).
I recall at an Evergreen hackfest that someone was tweaking the web interface 
for an inventory type exercise, where it would show red or green depending on 
some condition.

Cheers,
Tara

On 10/09/2014 11:52 AM, Harper, Cynthia wrote:
 Now that someone has mentioned IFTTT, I'm reading up on it and wonder 
 if
it could make this task possible:

 One of my tasks is copy cataloging. I'm only authorized to do LC copy,
which involves opening the record (already downloaded in the acq process), and 
checking to see that 490 doesn't exist (I can't handle series), and looking for 
DLC in the 040 |a and |c.
 It's discouraging when I take 10 books back to my desk from the 
 cataloging
shelf, and all 10 are not eligible for copy cataloging.

 S...  could I take my phone to the cataloging shelf, use IFTTT to 
 scan
my ISBN, search in the III WebPAC, look at the MARC record and tell me whether 
it's LC copy?

 Empower the frontline workers! :)

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu


--

Tara Robertson

Accessibility Librarian, CAPER-BC http://caperbc.ca/ T  604.323.5254 F
604.323.5954 trobert...@langara.bc.ca

Langara. http://www.langara.bc.ca

100 West 49th Avenue, Vancouver, BC, V5Y 2Z6


Re: [CODE4LIB] IFTTT and barcodes

2014-09-10 Thread Harper, Cynthia
Oh, cool! - I just reread the suggestion about barcode scanner on Android. 
That's the key...
Thanks!
- Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ian 
Walls
Sent: Wednesday, September 10, 2014 3:49 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

I don't think IFTTT is the right tool, but the basic idea is sound.

With a spot of custom scripting on some server somewhere, one could take in an 
ISBN, lookup via the III WebPac, assess eligibility conditions, then return yes 
or no.  Barcode Scanner on Android has the ability to do custom search URLs, so 
if your yes/no script can accept URL params, then you should be all set.  

Barring a script, just a lookup of the MARC record may be possible, and if it 
was styled in a mobile-friendly manner, perhaps you could quickly glean whether 
it's okay or not for copy cataloging.

Side question: is there connectivity in the stacks for doing this kind of 
lookup?  I know in my library, that's not always the case.


-Ian

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Wednesday, September 10, 2014 3:31 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

Webhooks via the WordPress channel?

Riley Childs
Senior
Charlotte United Christian Academy
Library Services Administrator
IT Services
(704) 497-2086
rileychilds.net
@rowdychildren

From: Tara Robertson mailto:trobert...@langara.bc.ca
Sent: ‎9/‎10/‎2014 3:03 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] IFTTT and barcodes

Hi,

I don't think this is possible using IFTTT right now as existing channels don't 
exist to create a recipe. I'm trying to think of what those channels would be 
and can't quite...I don't think IFTTT is the best tool for this task.

What ILS are you using? Could you hook a barcode scanner up to a tablet and 
scan, then check the MARC...nah, that's seeming almost as time consuming as 
taking it to your desk (depending on how far your desk is).
I recall at an Evergreen hackfest that someone was tweaking the web interface 
for an inventory type exercise, where it would show red or green depending on 
some condition.

Cheers,
Tara

On 10/09/2014 11:52 AM, Harper, Cynthia wrote:
 Now that someone has mentioned IFTTT, I'm reading up on it and wonder 
 if
it could make this task possible:

 One of my tasks is copy cataloging. I'm only authorized to do LC copy,
which involves opening the record (already downloaded in the acq process), and 
checking to see that 490 doesn't exist (I can't handle series), and looking for 
DLC in the 040 |a and |c.
 It's discouraging when I take 10 books back to my desk from the 
 cataloging
shelf, and all 10 are not eligible for copy cataloging.

 S...  could I take my phone to the cataloging shelf, use IFTTT to 
 scan
my ISBN, search in the III WebPAC, look at the MARC record and tell me whether 
it's LC copy?

 Empower the frontline workers! :)

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu


--

Tara Robertson

Accessibility Librarian, CAPER-BC http://caperbc.ca/ T  604.323.5254 F
604.323.5954 trobert...@langara.bc.ca

Langara. http://www.langara.bc.ca

100 West 49th Avenue, Vancouver, BC, V5Y 2Z6


[CODE4LIB] FW: [CODE4LIB] Open-source batch authority control?

2014-09-08 Thread Harper, Cynthia
-Original Message-
From: Harper, Cynthia 
Sent: Friday, September 05, 2014 8:25 PM
To: Will Martin
Subject: RE: [CODE4LIB] Open-source batch authority control?

Thanks Will. I know about MarcEdit.  What I'm concerned about is not so much 
technique, but a source of authority records. Obviously, there's LC Authorities, 
but what I don't know is whether those records all have RDA equivalents in that 
database, or what the advantages of the commercial authority databases are.  Of 
course, we can get authority records from OCLC if we could work out some method 
of selecting a batch of matching authorities that doesn't cost a lot extra.  We 
have III automated authorities, so one authority record can update all the bibs 
if it has a 4xx that matches their 1xx.  What I need is an overview of what my 
options are for selecting a batch of authority records from a good source.
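The 4xx-to-1xx flip described above can be sketched roughly as follows; the data shapes are assumptions for illustration, not III's actual implementation:

```python
# authorities: iterable of (established_heading, [see_from_variants]);
# bib_headings: headings as they currently appear in bib records.
# Returns {old_heading: new_heading} for every bib heading that matches
# an authority's 4xx see-from reference.

def headings_to_update(authorities, bib_headings):
    see_from = {}
    for established, variants in authorities:
        for v in variants:
            see_from[v] = established
    return {h: see_from[h] for h in bib_headings if h in see_from}
```

The real work, as the thread says, is sourcing the authority records in the first place, not the matching.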

Cindy

From: Will Martin [w...@will-martin.net]
Sent: Friday, September 05, 2014 5:36 PM
To: Code for Libraries
Cc: Harper, Cynthia
Subject: Re: [CODE4LIB] Open-source batch authority control?

I'm not super-familiar with cataloging software, but would MarcEdit do the 
trick?

http://marcedit.reeset.net/

I know it's designed explicitly for batch editing of MARC records, I just don't 
know whether that extends to authority control stuff.

Will Martin

Web Services Librarian
Chester Fritz Library
University of North Dakota

On 2014-09-05 16:00, Harper, Cynthia wrote:
 Do any of you have processes for batch authority control - getting 
 MARC authority records into an ILS - that are less-costly (in terms of
 cash) than commercial services like MARCIVE or LTI?   I'm working for
 a cash-strapped organization, have computing skills, and think it a 
 shame that our staff spends so much time on piece-by-piece repetitive 
 work.

 Any suggestions?

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu


[CODE4LIB] Open-source batch authority control?

2014-09-05 Thread Harper, Cynthia
Do any of you have processes for batch authority control - getting MARC 
authority records into an ILS - that are less-costly (in terms of cash) than 
commercial services like MARCIVE or LTI?   I'm working for a cash-strapped 
organization, have computing skills, and think it a shame that our staff spends 
so much time on piece-by-piece repetitive work.

Any suggestions?

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


Re: [CODE4LIB] non-kb based openurl from ILS

2014-07-14 Thread Harper, Cynthia
Sorry if this is too basic, but I'm not sure what you mean by shadowing 
records in Aleph - does that mean you'd just have brief records in Aleph which 
are loaded from an OCLC kb export? But that sounds like your second option, 
adding a linkout...  I just don't know the shadowing terminology.

Cindy Harper
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jenn 
Riley
Sent: Monday, July 14, 2014 1:54 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] non-kb based openurl from ILS

Hi everyone,

We use Aleph for our back end ILS and WorldCat Local as our discovery layer. 
We'd previously maintained both the OCLC KB and the SFX KB, but under the 
principle of not maintaining data in two places we've retired SFX. We started 
out just shadowing records for e-books and e-journals in Aleph, but are now 
wondering what it would look like to continue to have those records available 
in Aleph and find a way to have them link out. The URLs in these (currently 
shadowed) records go to SFX, and trying to redirect or batch update those looks 
like a no-go based on the difficulty of matching up the SFX and OCLC KB data. 
So we'd have to hide those URLs and find another way to get the user from the 
Aleph search result to the electronic version of that item.

In this scenario we're considering a 'find full text' button in Aleph that 
sends an OpenURL request through to the OCLC link resolver. It's basically 
old-skool OpenURL as before KBs existed. Or how OpenURL works from an A&I 
database.
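Building such a KB-less OpenURL is mostly string assembly. A minimal Python sketch, with a made-up resolver address and old-style OpenURL key names (the real request would carry whatever citation fields Aleph can supply):

```python
# Build a link-resolver URL from citation metadata alone -- no KB lookup.
from urllib.parse import urlencode

def build_openurl(base, **citation):
    """base: resolver base URL; citation: OpenURL key/value pairs."""
    return base + "?" + urlencode(citation)
```

For example, `build_openurl("https://resolver.example.org/openurl", genre="article", issn="1234-5678", volume="12", spage="34")` yields a URL the OCLC resolver could interpret the same way it does requests from an A&I database.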

We think it's possible to do this in Aleph piggybacking on how the SFX requests 
were constructed and sent, but would like to talk with others who have done 
this or something similar through Aleph, if there are any. Anyone have any 
experience with this? Or words of advice?

Jenn

---
Jenn Riley
Associate Dean, Digital Initiatives | Vice Doyenne, Initiatives numériques

McGill University Library | Bibliothèque Université McGill
3459 McTavish Street | 3459, rue McTavish Montreal, QC, Canada H3A 0C9 | 
Montréal (QC) Canada  H3A 0C9

(514) 398-3642
jenn.ri...@mcgill.ca


Re: [CODE4LIB] non-kb based openurl from ILS

2014-07-14 Thread Harper, Cynthia
Yes that helps - we call it suppressed.

Your post inspired me to research changing from our old-style III link 
resolver to OCLC KB.  So far, fun with pubget...

Cindy

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Jenn 
Riley
Sent: Monday, July 14, 2014 3:05 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] non-kb based openurl from ILS

Oops, sorry for being oblique! Right now we have records for ebooks and 
ejournals in Aleph completely hidden from users ('shadowed' as we call it 
here), so they're not retrieved in searches. We did this because we knew the 
links in them werent going to work once we shut off SFX. If we had OpenURL 
working in Aleph going through the OCLC KB we'd unshadow these records and make 
them available to users again.

This all assumes the records can get into Aleph in the first place which is a 
separate issue, for which our approach and painful decisions are too 
complicated to get into here. :-)

Does that help?

Jenn




On 2014-07-14 2:50 PM, Harper, Cynthia char...@vts.edu wrote:

Sorry if this is too basic, but I'm not sure what you mean by shadowing
records in Aleph - does that mean you'd just have brief records in 
Aleph which are loaded from an OCLC kb export? But that sounds like 
your second option, adding a linkout...  I just don't know the shadowing
terminology.

Cindy Harper
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Jenn Riley
Sent: Monday, July 14, 2014 1:54 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] non-kb based openurl from ILS

Hi everyone,

We use Aleph for our back end ILS and WorldCat Local as our discovery 
layer. We'd previously maintained both the OCLC KB and the SFX KB, but 
under the principle of not maintaining data in two places we've retired 
SFX. We started out just shadowing records for e-books and e-journals 
in Aleph, but are now wondering what it would look like to continue to 
have those records available in Aleph and find a way to have them link out.
The URLs in these (currently shadowed) records go to SFX, and trying to 
redirect or batch update those looks like a no-go based on the 
difficulty of matching up the SFX and OCLC KB data. So we'd have to 
hide those URLs and find another way to get the user from the Aleph 
search result to the electronic version of that item.

In this scenario we're considering a 'find full text' button in Aleph 
that sends an OpenURL request through to the OCLC link resolver. It's 
basically old-skool OpenURL as before KBs existed. Or how OpenURL works 
from an A&I database.

We think it's possible to do this in Aleph piggybacking on how the SFX 
requests were constructed and sent, but would like to talk with others 
who have done this or something similar through Aleph, if there are any.
Anyone have any experience with this? Or words of advice?

Jenn

---
Jenn Riley
Associate Dean, Digital Initiatives | Vice Doyenne, Initiatives 
numériques

McGill University Library | Bibliothèque Université McGill
3459 McTavish Street | 3459, rue McTavish Montreal, QC, Canada H3A 0C9 
| Montréal (QC) Canada  H3A 0C9

(514) 398-3642
jenn.ri...@mcgill.ca


Re: [CODE4LIB] Barcode scanner

2014-07-07 Thread Harper, Cynthia
It's your choice of a CSV or text file.

At a previous library, we used the III Millennium inventory system. You could 
edit this file with a macro to make it suitable for ingestion into the 
inventory system, and then upload it to III and process it from there.  I don't 
think III is still selling this old text-based inventory system, but it still 
works for the libraries that have it.  So this barcode scanner is not 
compatible with the new III Circa inventory system, AFAIK.  Other systems are 
out of my knowledge-base scope.  I mostly suggested this option thinking Riley 
may be processing the data outside the ILS.  That's what I've done with our 
small-scale periodicals counting project.
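As an aside, that outside-the-ILS processing can be as simple as counting scans per barcode. A minimal Python sketch, assuming the scanner dumps CSV rows of barcode,timestamp (the exact Opticon column layout is an assumption):

```python
# Tally use counts per barcode from a time-stamped scanner dump.
import csv
from collections import Counter
from io import StringIO

def use_counts(csv_text):
    counts = Counter()
    for row in csv.reader(StringIO(csv_text)):
        if row:                     # skip blank lines
            counts[row[0]] += 1     # first column is the barcode
    return counts
```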

Cindy Harper
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
Elizabeth Leonard
Sent: Monday, July 07, 2014 11:24 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

Cindy-

A couple questions:

The data is dumped into what type of file? Do you have an option?

And then how do you move that data into your ILS? (I know this is ILS dependent 
but I am trying to envision the workflow). Do you then use an attached barcode 
reader to scan them into your system? Or do you have a way to import?


Elizabeth Leonard
Seton Hall University
400 South Orange Avenue
South Orange, NJ 07079
973-761-9445



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Harper, 
Cynthia
Sent: Wednesday, July 2, 2014 8:30 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

We use one of this family of scanners - Opticon OPN200x - for print periodicals 
use counts. It's standalone or USB,  collects a time-stamped barcode file, and 
you can download when you care to.  The battery seems to last forever before 
needing recharging under my use conditions.  
http://www.opticonusa.com/products/companion-scanners/opn2001.html
 

Cindy Harper
Electronic Services and Serials Librarian Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu 



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Tuesday, July 01, 2014 5:37 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

We use code39 for everything. I am trying to find something that I can give to 
2 volunteers to run inventory twice a year without having to be tied to an iPad.

Riley Childs
Student
Asst. Head of IT Services
Charlotte United Christian Academy
(704) 497-2086
RileyChilds.net
Sent from my Windows Phone, please excuse mistakes 

From: Riesner, Giles W. mailto:gries...@ccbcmd.edu
Sent: ‎7/‎1/‎2014 3:51 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

Riley,

Basically ANY barcode scanner would work for you. Barcode scanners simply read 
in data as though it was typed in from a keyboard.
What matters is that you have the symbologies  you need enabled. Library 
barcodes tend to be Codabar (which is not always enabled by default), while 
stores often use UPC/EAN (which is usually enabled). And the barcodes for our 
students and staff at the College are in Code 128.  If you can attach the 
barcode reader to a laptop and scan the barcodes into a blank text file, then 
it's enabled.

If you grab a copy of the manual for the barcode reader you can see how to 
program in any prefixes or suffixes you need and more - things like being able 
to tell which symbology is being used.

If all you're doing is scanning in barcode numbers to say that this piece of 
equipment is here, you don't even need a special program, just a text file that 
can be imported into Excel. We do something similar and upload data to our 
library system to update  the inventory of our collection at the various 
Branches.

There are indeed apps for Android and IOS devices that might enable you to use 
a phone to do it too.

Just my .02 worth.

Regards,


Giles W. Riesner, Jr. | Lead Library Technician , Library Technology
The Community College of Baltimore County   | 800 South Rolling Road | 
Catonsville, MD 21228 USA
Phone:  1-443-840-2736 | Fax: 1-410-455-6436 | Email:  gries...@ccbcmd.edu 
CCBC. The incredible value of education.



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Monday, June 30, 2014 9:24 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Barcode scanner

I am trying to find a barcode scanner that I can do inventory with. I was 
looking at the KDC20, but it is a tad out of my price range. What barcode 
scanner do you like? I have a Metrologic Voyager (Honeywell branded) that I 
like, but am trying to see what others have and get some better suggestions.

Riley Childs
Student
Asst. Head of IT Services
Charlotte United Christian Academy
(704) 497-2086
RileyChilds.net
Sent from my

Re: [CODE4LIB] Barcode scanner

2014-07-02 Thread Harper, Cynthia
We use one of this family of scanners - Opticon OPN200x - for print periodicals 
use counts. It's standalone or USB,  collects a time-stamped barcode file, and 
you can download when you care to.  The battery seems to last forever before 
needing recharging under my use conditions.  
http://www.opticonusa.com/products/companion-scanners/opn2001.html
 

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu 



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Tuesday, July 01, 2014 5:37 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

We use code39 for everything. I am trying to find something that I can give to 
2 volunteers to run inventory twice a year without having to be tied to an iPad.

Riley Childs
Student
Asst. Head of IT Services
Charlotte United Christian Academy
(704) 497-2086
RileyChilds.net
Sent from my Windows Phone, please excuse mistakes 

From: Riesner, Giles W. mailto:gries...@ccbcmd.edu
Sent: ‎7/‎1/‎2014 3:51 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Barcode scanner

Riley,

Basically ANY barcode scanner will work for you. Barcode scanners simply read 
in data as though it were typed on a keyboard.
What matters is that the symbologies you need are enabled. Library barcodes 
tend to be Codabar (which is not always enabled by default), while stores 
often use UPC/EAN (which usually is). The barcodes for our students and staff 
at the College are in Code 128. If you can attach the barcode reader to a 
laptop and scan your barcodes into a blank text file, the symbology you need 
is enabled.

If you grab a copy of the manual for the barcode reader, you can see how to 
program in any prefixes or suffixes you need, and more - things like being 
able to tell which symbology is being used.

If all you're doing is scanning in barcode numbers to say that this piece of 
equipment is here, you don't even need a special program, just a text file 
that can be imported into Excel. We do something similar and upload the data 
to our library system to update the inventory of our collection at the 
various branches.

There are indeed apps for Android and iOS devices that might let you use a 
phone to do this too.

Just my .02 worth.

Regards,


Giles W. Riesner, Jr. | Lead Library Technician , Library Technology
The Community College of Baltimore County   | 800 South Rolling Road | 
Catonsville, MD 21228 USA
Phone:  1-443-840-2736 | Fax: 1-410-455-6436 | Email:  gries...@ccbcmd.edu 
CCBC. The incredible value of education.



-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Riley 
Childs
Sent: Monday, June 30, 2014 9:24 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] Barcode scanner

I am trying to find a barcode scanner that I can do inventory with. I was 
looking at the KDC20, but it is a tad out of my price range. What barcode 
scanner do you like? I have a Metrologic Voyager (Honeywell-branded) that I 
like, but am trying to see what others have and get some better suggestions.

Riley Childs
Student
Asst. Head of IT Services
Charlotte United Christian Academy
(704) 497-2086
RileyChilds.net
Sent from my Windows Phone, please excuse mistakes


[CODE4LIB] Adding reference product hits to discovery layer results

2014-06-17 Thread Harper, Cynthia
Hi All - We are a very small institution with a limited number of users and a 
limited number of electronic products. We do subscribe to several Oxford 
Reference Online products, e.g. biographical/subject dictionaries.  Has anyone 
tried metasearching such products along with their discovery layer, or obtained 
indexes that could be added to a discovery layer?  To be honest, we haven't yet 
developed subject guides for these areas, so that would be the first step in 
marketing, but I wondered about the DL approach.


Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


[CODE4LIB] VuFind 2.1 + Pazpar2

2014-04-01 Thread Harper, Cynthia
Is there a cookbook document for setting up VuFind with no Solr database, but 
just metasearch through Pazpar2? I've got my first Pazpar2 instance set up, now 
I need to tell VuFind to bypass the Solr database.  And if you can point me to 
documentation that tells me how to debug VuFind (where are the logs and error 
details?), that would be nice too.  Sorry if I'm being too lazy, but I thought 
I'd ask.

Thanks,
Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


Re: [CODE4LIB] VuFind 2.1 + Pazpar2

2014-04-01 Thread Harper, Cynthia
Sorry - I just learned there's a VuFind listserv - that's where this belongs

From: Harper, Cynthia
Sent: Tuesday, April 01, 2014 8:48 AM
To: 'Code for Libraries'
Subject: VuFind 2.1 + Pazpar2

Is there a cookbook document for setting up VuFind with no Solr database, but 
just metasearch through Pazpar2? I've got my first Pazpar2 instance set up, now 
I need to tell VuFind to bypass the Solr database.  And if you can point me to 
documentation that tells me how to debug VuFind (where are the logs and error 
details?), that would be nice too.  Sorry if I'm being too lazy, but I thought 
I'd ask.

Thanks,
Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


Re: [CODE4LIB] Book scanner suggestions redux

2014-03-07 Thread Harper, Cynthia
I'm curious - how does the shooting time per page compare to something like a 
Minolta PS7000? We've got an old PS7000, but my experience with the one I've 
used before was that it took so long to shoot each page. Also, the PS7000 
model didn't accommodate a bound volume that wouldn't open flat all that well. 
Would this be an improvement over that?

Cindy Harper
Virginia Theological Seminary
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of 
raffaele messuti
Sent: Friday, March 07, 2014 6:50 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Book scanner suggestions redux

Chris Fitzpatrick wrote:
 I've used one of the DIY Bookscanners kits. Worked great and I didn't 
 have to go into the dumpster.  They did a good job on the components 
 and assembly was rather easy.
 
 However, it is all very much a manual process. An operator has to work 
 the machine to scan all the pages.

I confirm, there is a lot of DIY (do it yourself).

But this software aims to facilitate the workflow:
http://spreads.readthedocs.org/en/latest/
https://github.com/DIYBookScanner/spreads

It's a Python program to remotely control the two cameras: shoot 
simultaneously, download the image files, rename, rotate, or apply whatever 
filter, and finally prepare the ScanTailor project, or just package the book 
as PDF or DjVu.
There is also an experimental web interface that lets you control everything 
from the browser.
I'm using spreads on a Raspberry Pi (installed manually on Raspbian), but 
there is also a tailored image that you can build: 
https://github.com/DIYBookScanner/spreadpi

bye


--
raffaele, @atomotic


[CODE4LIB] Broadening a geographic search in discovery layers

2014-02-12 Thread Harper, Cynthia
Is anyone doing any work that would make it possible to broaden a geographic 
search in a discovery layer?  For instance, broadening specific country 
subheadings into a continent search?  Just wondering.

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu
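
One low-tech approach would be a hand-maintained country-to-continent lookup 
that expands a country subheading into an OR over its continent's siblings. A 
hypothetical Python sketch (the lookup table below is illustrative only):

```python
# Illustrative country->continent lookup; a real table would cover
# every geographic subheading used in the catalog.
CONTINENT = {
    "France": "Europe", "Germany": "Europe",
    "Kenya": "Africa", "Nigeria": "Africa",
}

def broaden(country):
    # Expand one country subheading into all countries sharing its
    # continent, suitable for OR-ing into a broader facet query.
    continent = CONTINENT.get(country)
    if continent is None:
        return [country]
    return sorted(c for c, cont in CONTINENT.items() if cont == continent)

print(broaden("Kenya"))  # ['Kenya', 'Nigeria']
```

The expanded list could then be fed to the discovery layer as a single OR'd 
subject query.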


[CODE4LIB] LibX 2.0 Libapps

2014-02-08 Thread Harper, Cynthia
What's the current state of LibX 2.0 and Libapps sharing?  Why am I having 
trouble finding that out?

Cindy Harper
cindyharper1...@gmail.com


[CODE4LIB] FW: [CODE4LIB] Interested in enhancing call-number browse in Millennium catalog

2013-12-04 Thread Harper, Cynthia
I didn't think of it as putting a meaning on every row, just a header where a 
meaning starts, to act as a guide as the user is scanning the call number 
range. I think you're right - I should just harvest the most important call 
number ranges and insert those. And since I'm in a theological library, I can 
just concentrate on the B's to keep it manageable.  I'll give it a try during 
the holiday.
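
The interspersed-header idea can be sketched quickly. A hypothetical Python 
version, assuming a harvested list of (range start, caption) pairs; note that 
plain string comparison is only a rough stand-in for proper LC call-number 
normalization (lexically, "BX21" sorts before "BX3"):

```python
import bisect

# Illustrative harvested captions, sorted by range start.
CAPTIONS = [
    ("BX100", "Christian denominations - Eastern churches"),
    ("BX200", "Orthodox Eastern Church"),
]

def with_headers(call_numbers):
    # Walk a sorted browse list and emit a caption row wherever the
    # governing classification range changes.
    starts = [start for start, _ in CAPTIONS]
    out, last = [], None
    for cn in call_numbers:
        i = bisect.bisect_right(starts, cn) - 1
        if i >= 0 and i != last:
            out.append("== " + CAPTIONS[i][1] + " ==")
            last = i
        out.append(cn)
    return out
```

A production version would normalize call numbers before comparing and pull 
the captions from the id.loc.gov classification data.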

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Karen 
Coyle
Sent: Wednesday, December 04, 2013 2:53 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Interested in enhancing call-number browse in 
Millennium catalog

On 12/4/13 11:10 AM, Kyle Banerjee wrote:

 But this is horribly hacky and a lot of work for relatively little gain.
 When you get right down to it, the only purpose of a call number is to 
 physically collocate materials on the shelf and it's not really that 
 useful for search which is why practically no one aside from a few 
 cataloging nerds do call number searches. Plus, anyone geeky enough to 
 do a call number search actually must know what call number range is 
 relevant to their needs. Keep in mind that in many cases, no decent 
 call number exists for a concept and the best one available really is 
 quite crummy so prominently displaying that won't necessarily be a good thing.
Well, I have to disagree with some of this. Although the call number 
is the shelf location, it has topical meaning that, while undoubtedly not 
perfect, is supposed to collocate items based on their subject matter. With LCC 
the big problem is that you can walk down an aisle with similar-looking numbers 
and have passed into a very different subject area. You can sometimes figure 
this out by book titles, but unlike STEM journal article titles, book titles 
can be more catchy than informative. "Don't Think of an Elephant!" is a book 
on progressive political rhetoric. (Thanks a lot, Lakoff.) "The Anarchist in 
the Library" is a book about information distribution and the social order. 
(Thanks, Siva V.)

I think we have done users a disservice for decades expecting them to somehow 
magically guess what they are looking at on the shelf. 
Collocation only takes you so far, because at some juncture, two books beside 
each other on the shelf are going to *have* to be about different topics even 
though their class numbers sort beside each other. I'm all for serendipity, but 
some information seeking needs an informed user.

kc




 kyle


 On Tue, Dec 3, 2013 at 6:35 PM, Harper, Cynthia char...@vts.edu wrote:

 I'm thinking of trying to enhance the call-number browse pages on a 
 Millennium catalog with meanings of the classification ranges taken 
 from the LoC Classification database.

 http://id.loc.gov/authorities/classification.html

 a typical call-number browse page might look like this:

 http://librarycatalog.vts.edu/search~S1?/cBX100.7.B632+1999/cbx++100.7+b632+1999/-3,-1,,E/browse

 I'd like to intersperse the call-number 
 listing with call-number range meanings like

 BX100 - Christian denominations - Eastern churches

 Has anyone tried this?  Can you point me to the API documentation for 
 the LC Classification?

 Cindy Harper


--
Karen Coyle
kco...@kcoyle.net http://kcoyle.net
m: 1-510-435-8234
skype: kcoylenet


[CODE4LIB] Interested in enhancing call-number browse in Millennium catalog

2013-12-03 Thread Harper, Cynthia
I'm thinking of trying to enhance the call-number browse pages on a
Millennium catalog with meanings of the classification ranges taken from
the LoC Classification database.

http://id.loc.gov/authorities/classification.html

a typical call-number browse page might look like this:
http://librarycatalog.vts.edu/search~S1?/cBX100.7.B632+1999/cbx++100.7+b632+1999/-3,-1,,E/browse

I'd like to intersperse the call-number listing with call-number range
meanings like

BX100 - Christian denominations - Eastern churches

Has anyone tried this?  Can you point me to the API documentation for the
LC Classification?

Cindy Harper


Re: [CODE4LIB] What can be done to stop deleting of records belonging to users of our Minuteman Library Network in Massachusetts?

2013-10-15 Thread Harper, Cynthia
Perhaps if you exported data from lists that are likely to have items/bibs 
deleted after you have collected them, you could keep an archive of the data.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of don 
warner saklad
Sent: Friday, October 11, 2013 10:23 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] What can be done to stop deleting of records belonging 
to users of our Minuteman Library Network in Massachusetts?

a) Forensic studies deal with how to retrieve deleted, unarchived
data. So-called deleted data is often still available.

b) Setup the system not to delete records belonging to users. Let users keep 
their information saved for followup. Or at the very least notify users 
beforehand.

Example: a title wrongfully deleted. The title is replaced by an 8-character 
record number (a lower-case letter followed by 7 digits).

| You are logged in
https://library.minlib.net/patroninfo~S1/1196412/mylists

My Lists  organize130808 ( 29 )
https://library.minlib.net/patroninfo~S1/1196412/mylists?listNum=31980

Mark   Title   Author   Date Added
[_] Record b2491348 is not available   03-12-2013
[_] Record b2522793 is not available   03-12-2013
[_] Record b2926646 is not available   10-29-2011
[_] Record b2948837 is not available   10-25-2011


Re: [CODE4LIB] VuFind 2.1 Released

2013-08-26 Thread Harper, Cynthia
Does this mean that VuFind can be configured to have no pre-indexed content, 
but be used for federated search of PazPar2 sites only?  Our catalog is very 
small, the number of databases we subscribe to is very small, and I have 
thought that makes us a candidate for federated search, since we don't have a 
commercial discovery layer.  I'd also be interested in adding indexing of 
websites.  But committing to an ongoing pre-indexing process of ongoing changes 
to the catalog has kept me from trying VuFind.

Cindy Harper
Virginia Theological Seminary

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Demian 
Katz
Sent: Monday, August 26, 2013 11:17 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] VuFind 2.1 Released

Apologies for cross-posting

FOR IMMEDIATE RELEASE



VuFind 2.1 Released



Villanova, Pennsylvania - August 26, 2013 - Version 2.1 of the VuFind Open 
Source discovery software has just been released. This release further improves 
the stability and flexibility of VuFind 2.0, adding significant new features in 
the process.



Some key additions:



- Flexible and configurable support for combining search results from multiple 
sources, inspired by Villanova University's multi-column Catalog/Summon search.

- A new module for indexing websites.

- Integration with new software and services, including Booksite enhanced 
content, the Polaris ILS, and the Pazpar2 open source metasearch tool.



Additionally, several bug fixes and minor improvements have been incorporated.



Questions about the new release or VuFind in general can be directed to Demian 
Katz, the lead developer of the project at Villanova University. The software 
and its documentation may be found at http://vufind.org.



Contact:
Demian Katz
demian.k...@villanova.edu
Villanova University
Falvey Memorial Library
800 Lancaster Avenue
Villanova, PA 19085



###


Re: [CODE4LIB] De-dup MARC Ebook records

2013-08-15 Thread Harper, Cynthia
Michael -  I'm just about to load ebook records into our Innovative catalog, 
and I'm going to keep the e-books separate from the print book records.  For 
ebooks, I'm going to copy the OCLC number to the 901 with a prestamp, and 
overlay on that. So only records loaded with our ebook load table will have 
this 901 to overlay on.  Then I'm going to protect the 856s and the 710s for 
the ebook collection statement.  That'll take care of adds.  For deletes... I 
haven't got that worked out yet.  I think there's a way to delete a field based 
on the incoming field.

Cindy Harper
Virginia Theological Seminary
char...@vts.edu

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Andy 
Kohler
Sent: Thursday, August 15, 2013 2:29 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] De-dup MARC Ebook records

Are you expecting to work with two files of records, outside of your ILS?
If so, for a project like that I'd probably write Perl script(s) using 
MARC::Record (there are similar code libraries for Ruby, Python and Java at 
least).

For each record in each file, use the ISBN (and/or OCLC number and/or LCCN) as 
a key.  Compare all sets, and keep one record per key.

This assumes that the vendors are supplying records with standard identifiers, 
and not just their own record numbers.

If you're comparing each file with what's already in your ILS, then it'll 
depend on the tools the ILS offers for matching incoming records to the 
database.  Or, export the database and compare it with the files, as above.

Andy Kohler / UCLA Library Info Tech
akoh...@library.ucla.edu / 310 206-8312

On Thu, Aug 15, 2013 at 10:11 AM, Michael Beccaria mbecca...@paulsmiths.edu
 wrote:

 Has anyone had any luck finding a good way to de-duplicate MARC 
 records from ebook vendors. We're looking to integrate Ebrary and 
 Ebsco Academic Ebook collections and they estimate an overlap into the 10's 
 of thousands.
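
A hypothetical sketch of the keep-one-record-per-key pass described above, in 
plain Python (a real version would read the files with a MARC library such as 
pymarc and derive each key from the 020/001/010 fields):

```python
def dedupe(records):
    # Keep the first record seen for each identifier key (ISBN,
    # OCLC number, or LCCN); later duplicates are dropped.
    seen = {}
    for key, record in records:
        seen.setdefault(key, record)
    return list(seen.values())

batch = [
    ("9780131103627", "ebrary record"),   # identifiers are illustrative
    ("9780131103627", "EBSCO record"),    # duplicate ISBN, dropped
    ("9781565922570", "EBSCO record 2"),
]
print(dedupe(batch))  # ['ebrary record', 'EBSCO record 2']
```

Which duplicate to keep (first seen vs. preferred vendor) is a policy choice; 
swapping setdefault for an explicit priority check would handle the latter.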




[CODE4LIB] Regular expression for maximum 4-digit number

2013-07-02 Thread Harper, Cynthia
Is there a way to return (in Excel, if possible) the largest 4-digit number (by 
word boundaries) in a string?  I've extracted the 863 fields from Millennium 
for my active periodicals, and want to find the latest year in each run.  I'm 
willing to estimate it by taking the largest 4-digit number in the string. I'm 
doing this in Excel.  Any help?
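
A minimal sketch of this extraction in Python rather than Excel (the sample 
863-style string is illustrative, not real data):

```python
import re

def latest_year(holdings):
    # Collect every standalone 4-digit number (bounded by word breaks)
    # and return the largest, as an estimate of the latest year held.
    years = [int(m) for m in re.findall(r"\b\d{4}\b", holdings)]
    return max(years) if years else None

print(latest_year("v.12 (1998)-v.25 (2011)"))  # 2011
```

The \b word boundaries keep 5-digit numbers and digit runs inside longer 
tokens from matching.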

Cindy Harper
Electronic Services and Serials Librarian
Virginia Theological Seminary
3737 Seminary Road
Alexandria VA 22304
703-461-1794
char...@vts.edu


Re: [CODE4LIB] Regular expression for maximum 4-digit number

2013-07-02 Thread Harper, Cynthia
I've used a couple of add-ins for regexp in Excel, but I wondered whether a 
regexp could test multiple matches in a single expression. But I guess that 
does require a multi-line program; I'll use VB. Thanks.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Kyle 
Banerjee
Sent: Tuesday, July 02, 2013 11:47 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] Regular expression for maximum 4-digit number

AFAIK, Excel has no built-in regex capabilities, so you'd need to call 
VBScript from Excel to do this.

In any case, you'll need to write an actual program to evaluate each line, 
since multiple values can occur in the same line. This will be easier if done 
as text than in VBA. Besides, the data in Excel came from Mil as text to 
begin with.

There are many ways to do what you want, but perl would be hard to beat for 
this use case

Kyle
On Jul 2, 2013 10:02 AM, Harper, Cynthia char...@vts.edu wrote:

 Is there a way to return (in Excel, if possible) the largest 4-digit 
 number (by word boundaries) in a string?  I've extracted the 863 
 fields from Millennium for my active periodicals, and want to find the 
 latest year in each run.  I'm willing to estimate it by taking the 
 largest 4-digit number in the string. I'm doing this in Excel.  Any help?

 Cindy Harper
 Electronic Services and Serials Librarian Virginia Theological 
 Seminary
 3737 Seminary Road
 Alexandria VA 22304
 703-461-1794
 char...@vts.edu



[CODE4LIB] ruby zoom and Yaz

2013-06-26 Thread Harper, Cynthia
Hi all -
I'm trying to bring up a test instance of LibraryFind on Amazon EC2. I have
installed YAZ 4.2.61, and I included --enable-shared in my ./configure
invocation. I'm using Ruby 1.8.7. When I try to 'sudo gem install zoom' I get
the message that 'yaz is apparently not installed', and I see that
'/usr/local/bin/yaz-config' is returning false. I'm a newbie to Ruby and
don't know where to go from here.

Anyone have any suggestions?

Cindy Harper


Re: [CODE4LIB] ruby zoom and Yaz

2013-06-26 Thread Harper, Cynthia
The instructions for the Zoom gem say that Yaz must be installed with 
enable-shared, and that the package defaults to static, so I concluded I had to 
install from source.  
And when I try to install the zoom gem from a package, I can't find it. Maybe I 
need to look a little harder?

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Cary Gordon 
[listu...@chillco.com]
Sent: Wednesday, June 26, 2013 2:34 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] ruby zoom and Yaz

Why are you installing YAZ from source rather from a package?

What do you mean by "'/usr/local/bin/yaz-config' is returning false"?


Re: [CODE4LIB] ruby zoom and Yaz

2013-06-26 Thread Harper, Cynthia
Yes, I ran sudo make install - I could combine the two into one step, right?
I'll check for errors again, but yaz-client runs fine on the machine, and 
yaz-config returns exit code 0, which I interpret as OK. But the zoom ruby 
code seems to interpret the yaz-config return code of 0 as false, meaning yaz 
IS NOT installed.

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Terrell, Trey 
[trey.terr...@oregonstate.edu]
Sent: Wednesday, June 26, 2013 7:06 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] ruby zoom and Yaz

It was my experience that was the case. Did you run make and sudo make install 
on the Yaz source after configuring it?

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Harper, Cynthia
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] ruby zoom and Yaz

The instructions for the Zoom gem say that Yaz must be installed with 
enable-shared, and that the package defaults to static, so I concluded I had to 
install from source.
And when I try to install the zoom gem from a package, I can't find it. Maybe I 
need to look a little harder?

From: Code for Libraries [CODE4LIB@LISTSERV.ND.EDU] on behalf of Cary Gordon 
[listu...@chillco.com]
Sent: Wednesday, June 26, 2013 2:34 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] ruby zoom and Yaz

Why are you installing YAZ from source rather from a package?

What do you mean by "'/usr/local/bin/yaz-config' is returning false"?


Re: [CODE4LIB] phone app for barcode-to-textfile?

2013-06-06 Thread Harper, Cynthia
But I don't see that it'll do Codabar or Code39.

-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Ken 
Irwin
Sent: Thursday, June 06, 2013 2:47 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] phone app for barcode-to-textfile?

This (CLZ Barry) looks like it could be perfect! $8/phone beats many other 
options!

Ken


-Original Message-
From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of Aaron 
Addison
Sent: Thursday, June 06, 2013 2:07 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] phone app for barcode-to-textfile?

You might want to look at

http://www.clz.com/barry/


-- 
Aaron Addison
Unix Administrator 
W. E. B. Du Bois Library UMass Amherst
413 577 2104



On Thu, 2013-06-06 at 17:40 +, Ken Irwin wrote:
 Hi all,
 
 Does anyone have a phone app (pref. iOS) that will just scan barcodes to a 
 textfile? All the apps I'm finding are shopping oriented or other special 
 uses. I just want to replace our antique barcode scanner that spits out a 
 list of barcodes as a text file.
 
 Anyone have such a thing? Or advice on where to assemble the building blocks 
 to create one?
 
 Thanks
 Ken