Re: [CODE4LIB] Fine collection online
> We would like to allow our patrons to pay their fines online. I am
> interested in hearing the solutions folks have for this.

We recently (6 months ago) implemented this. Our university's central IT
Services implemented a new SOA payment system, so we built something to
connect that with our ILS (Voyager). On the ILS side we use the RESTful
API for getting patron details and the SIP2 (self-check) interface for
recording payments.

Basically I think you'll probably have to write it yourself, and it'll be
pretty specific to the two systems you're dealing with (unless somebody
has already done this with Millennium and TouchNet). We had the advantage
of adapting part of a My Account page we'd written using the ILS APIs,
and of borrowing the SIP2 code from another library.

Good luck!

David
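For the curious, a rough sketch (not our actual code) of the SIP2 side of recording a payment, via the protocol's "Fee Paid" (37) message. The fee type ("01"), payment type ("00" = cash), currency, institution id, patron barcode, host and port below are all illustrative assumptions, and a real ILS will also expect a 93 Login exchange first.

```python
# Build a SIP2 "Fee Paid" (37) message with its trailing checksum.
from datetime import datetime

def sip2_checksum(msg: str) -> str:
    """Lower 16 bits of the two's complement of the summed ASCII values,
    rendered as four uppercase hex digits (per the SIP2 spec)."""
    return format((-sum(ord(c) for c in msg)) & 0xFFFF, "04X")

def fee_paid_message(patron_id: str, amount: str,
                     institution: str = "MAIN") -> str:
    ts = datetime.now().strftime("%Y%m%d    %H%M%S")  # 18-char timestamp
    # 37 = Fee Paid; "01" fee type, "00" payment type and "NZD" currency
    # are illustrative; BV = fee amount, AO = institution, AA = patron id.
    body = (f"37{ts}0100NZD"
            f"BV{amount}|AO{institution}|AA{patron_id}|AY1AZ")
    return body + sip2_checksum(body) + "\r"

# Sending it would be something like (host/port assumed):
#   import socket
#   with socket.create_connection(("ils.example.ac.nz", 6001)) as s:
#       s.sendall(fee_paid_message("21001234567890", "5.00").encode())
```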
Re: [CODE4LIB] separate list for jobs
>> elm++
> elm didn't have good MIME support

I have to agree. I have scoured YouTube and found no videos of Eric
Lease Morgan silently trapped in a glass box... :-)
Re: [CODE4LIB] separate list for jobs
> This is a pretty terrible reply.

I thought it was a great reply.

> obscure words (seriously, shibboleth?)

Somewhat obscure, but not so much in Code4Lib.
http://en.wikipedia.org/wiki/Shibboleth
http://en.wikipedia.org/wiki/Shibboleth_(Internet2)

> Unless you're trying to be sarcastic...in which case ignore this.

He most definitely was. I believe Stuart's point was that when the
multiple requests for a separate list for job notices get immediately
shot down with "no - use an email filter, or are you stupid?" [1], it
doesn't help to create an "inclusive" and "good learning environment".

[1] NB the respondents didn't explicitly say "are you stupid", but
that's how it may be taken by some people.

> And to answer the original question - job listings help more people
> than they annoy so they should be kept as-is.

My view is that it would make more sense to have separate discussion and
job notice lists, as I see elsewhere. But I'm not that bothered
personally, as I would subscribe to both and filter them into the same
folder in my mail client. :-)

Cheers
David
Re: [CODE4LIB] barriers to open metadata?
Hi Laura

> I'd like to find out from as many people as are interested what barriers
> you feel exist right now to you releasing your library's bibliographic
> metadata openly.

One issue is that we pay for enrichments (tables of contents etc.) for
records, and I believe the licence restricts us from giving them to
other people. We send our records to the national union catalogue and
OCLC before adding the enrichments, and we'd need to take them out
before we could "release" records elsewhere.

Cheers
David
Re: [CODE4LIB] EZProxy changes / alternatives ?
The subscription fee for Australia and New Zealand is AU$600 (excluding
GST) per year. They say: "Our 2014 releases will concentrate on IPV6 and
reporting capabilities."

I've just discovered that we're currently running 5.1c, which was
released in 2009. So perhaps we'll be able to survive on 5.7 for a
while? :-)

David

On 30 January 2014 03:14, Ingraham Dwyer, Andy wrote:
> OCLC announced in April 2013 the changes in their license model for
> North America. EZProxy's license moves from requiring a one-time
> purchase of US$495 to an *annual* fee of $495, or through their hosted
> service, with the fee depending on scale of service. The old one-time
> purchase license is no longer offered for sale as of July 1, 2013. I
> don't have any details about pricing for other parts of the world.
>
> An important thing to recognize here is that they cannot legally change
> the terms of a license that is already in effect. The software you have
> purchased under the old license is still yours to use, indefinitely.
> OCLC has even released several maintenance updates during 2013 that are
> available to current license-holders. In fact, they released V5.7 in
> early January 2014, and made that available to all license-holders.
> However, all updates after that version are only available to holders
> of the yearly subscription. The hosted product is updated to the most
> current version automatically.
>
> My recommendation is: If your installation of EZProxy works, don't
> change it. Yet. Upgrade your installation to the last version available
> under the old license, and use that for as long as you can. At this
> point, there are no world-changing new features that have been added to
> the product. There is speculation that IPv6 support will be the next
> big feature-add, but I haven't heard anything official. Start planning
> and budgeting for a change, either to the yearly fee, or the cost of
> hosted, or to some as-yet-undetermined alternative. But I see no need
> to start paying now for updates you don't need.
>
> -Andy
>
> Andy Ingraham Dwyer
> Infrastructure Specialist
> State Library of Ohio
> 274 E. 1st Avenue
> Columbus, OH 43201
> library.ohio.gov
>
> -Original Message-
> From: Code for Libraries [mailto:CODE4LIB@LISTSERV.ND.EDU] On Behalf Of
> stuart yeates
> Sent: Tuesday, January 28, 2014 10:03 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] EZProxy changes / alternatives ?
>
> I probably should have been more specific.
>
> Does anyone have experience switching from EzProxy to anything else?
>
> Is anyone else aware of the coming OCLC changes and considering
> switching?
>
> Does anyone have a worked example like: "My EzProxy config for site Y
> looked like A; after the switch, my X config for site Z looked like B"?
>
> I'm aware of this good article:
> http://journal.code4lib.org/articles/7470
>
> cheers
> stuart
>
> On 29/01/14 15:24, stuart yeates wrote:
>> We've just received notification of forthcoming changes to EZProxy,
>> which will require us to pay an arm and a leg for future versions to
>> install locally and/or host with OCLC AU with a ~10,000km round trip.
>>
>> What are the alternatives?
>>
>> cheers
>> stuart
>
> --
> Stuart Yeates
> Library Technology Services http://www.victoria.ac.nz/library/
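Not a worked migration example, I'm afraid, but for anyone comparing: a typical EZProxy database stanza looks like the below (the resource name and hostnames are hypothetical). An alternative proxy would need an equivalent per-resource mapping.

```
Title Example Journals Online
URL http://www.example-journals.com/
Domain example-journals.com
```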
Re: [CODE4LIB] Academic Library Website Question
> Your library stats should tell the tale of how folks are getting there.

FWIW our Google Analytics stats indicate search being the primary
vehicle:

45.9%  Google
12.0%  LMS (Moodle)
 6.6%  university management school subsite
 4.3%  OPAC
 3.9%  university main site
 3.6%  university education school subsite
 2.6%  university single sign-on
 2.3%  discovery layer (Summon)
 2.0%  Facebook

I hadn't looked before (the website's not under my jurisdiction), and
it's different from what I had expected...

David
Re: [CODE4LIB] Canberra event -- Ed Summers at NLA, 2 December
> not sure if i should be jealous of nla for getting ed to speak, or of ed
> for getting to go to australia.

Not as jealous as you should be of Ed for getting to come to New
Zealand. ;-)

> regardless, will there be a recording of said event?

> > And thanks to New Zealand's awesome National Digital Forum for
> > bringing Ed to our part of the world.

As mentioned, Ed is keynoting NDF:
http://www.ndf.org.nz/keynote-speakers-ndf2013/
and based on previous years, his talk and others will be available
afterwards:
http://www.ndf.org.nz/past-conferences/

David
Re: [CODE4LIB] pdf2txt
> For a limited period of time I am making publicly available a Web-based
> program called PDF2TXT -- http://bit.ly/1bJRyh8

Looks very good, and thanks for sharing it. (It's certainly not the
first piece of software called pdf2txt, but that probably doesn't
matter.)

> PDF2TXT extracts the text from an OCRed PDF document

The file I tried was born digital (probably from Word), so perhaps
outside your intended scope. The text output was fairly similar to that
from pdftotext (in Ubuntu's poppler-utils package), perhaps better in
losing the arbitrary line breaks, but it fell over on macrons. There
were a lot of Māori words, and the vowels with macrons disappeared -
e.g. Pākehā => Pkeh.

I assume Unicode issues were also at the heart of %3Cunknown%3E being
one of the "most frequent verbs". The link for this [1] gives a regex
error.

Cheers
David

[1] http://dh.crc.nd.edu/sandbox/pdf2txt/pdf2txt.cgi?cmd=verbs&id=1381700598&lemma=%3Cunknown%3E
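My guess at the failure mode (a sketch to illustrate it, not PDF2TXT's actual code): if a tool drops, rather than transliterates, anything outside 7-bit ASCII, then macronised vowels vanish and "Pākehā" degrades to "Pkeh", exactly as observed.

```python
import unicodedata

def strip_non_ascii(text: str) -> str:
    """Drop every character outside the 7-bit ASCII range."""
    return "".join(c for c in text if ord(c) < 128)

print(strip_non_ascii("Pākehā"))   # -> Pkeh

# Keeping the text as UTF-8 throughout avoids the problem; failing that,
# decomposing and dropping only the combining marks at least keeps the
# base vowels:
folded = (unicodedata.normalize("NFKD", "Pākehā")
          .encode("ascii", "ignore").decode())
print(folded)                      # -> Pakeha
```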
Re: [CODE4LIB] GitHub Myths (was thanks and poetry)
> If you're not willing to provide even your name to make use of a free
> service, then I dare say you are erecting your own barriers. Such is
> your choice, of course, but I don't think others need to be compelled
> to accommodate the barriers you create for yourself.
>
> And just because the terms of use are not unconditional, or perfectly
> to your liking, does not mean you're not welcome to use it. You are.

"To all the people complaining about the Code4Lib 2014 conference being
unwelcoming because of our new No Clothes Policy, I say you are wrong.
We are entitled to enact our own conditions of entry, and if you are
unwilling to front up naked then you are just erecting your own
barriers. The conference is open and welcome to all - I hope to see you
there."

:-p

A different post mentioned namespace collisions - I actually don't
suffer from this, and because of my unique name I sometimes prefer not
to hand it over in certain circumstances (but GitHub wouldn't worry me).

David
Re: [CODE4LIB] On-the-fly Closed Captioning
> It seems there's also an 'OpenSubtitles' player which isn't
> restricted to educational institutions, but as it's all torrent
> files and looks like many other torrent trackers, I'm afraid to
> download them (for fear they've got the video included).

Subtitle files are small - just the text plus cues for when to display
it - so the file size (i.e. the order of magnitude: kilobytes rather
than hundreds of megabytes) should be a good indication of whether a
file contains video or not.

David
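That rule of thumb in code form, as a sketch (the 500 KB threshold is an arbitrary assumption; real .srt files are typically tens of kilobytes):

```python
import os

def looks_like_subtitle(path: str, max_bytes: int = 500_000) -> bool:
    """Anything bigger than a few hundred KB almost certainly isn't
    just subtitle text."""
    return os.path.getsize(path) <= max_bytes
```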
Re: [CODE4LIB] usability testing software
> I may have an opportunity to put together a little bit of a usability
> testing lab at my library...

Have you seen GVSU's approach?
http://matthew.reidsrow.com/articles/12
http://matthew.reidsrow.com/articles/13

David
Re: [CODE4LIB] Q: "Discovery" products and authentication (esp Summon)
> But we do encourage (promote) an interface that forces
> off-campus authentication to our Summon instance.

With an explanation that it's because of pirates! :-)
https://auth.lib.unc.edu/ezproxy_auth.php?url=http://unc.summon.serialssolutions.com/search?s.q=

> And one we would need to revisit if we looked to Summon (or some other
> product) as a catalog+periodical literature hybrid. Right now we have
> separate discovery layers

That's a useful factor to add to the discussion, thanks. We use Summon
for "everything", but I can imagine that the discussion about requiring
authentication could go differently if it were only used for articles.

I was pretty sure that I remembered seeing some research showing that
authentication is a barrier, i.e. that a certain proportion of users
don't bother to continue. I did some brief-ish searches (taking the
opportunity to compare Summon, EDS, Primo and Google), but the best I
could come up with was this (an evidence-free statement):
http://www.codinghorror.com/blog/2007/06/removing-the-login-barrier.html
(I also found a medical paper [1] with a small study that appeared to
agree.)

David

[1] Tjora, Tran & Faxvaag (2005). "Privacy vs Usability: A qualitative
exploration of patients' experiences with secure internet communication
with their general practitioner", Journal of Medical Internet Research
7(2).
Re: [CODE4LIB] Q: "Discovery" products and authentication (esp Summon)
>> a) most queries come from on-campus

> Really? Are people just assuming this, or do they actually have data?
> That would surprise me for most contemporary american places of higher
> education.

For the last two months, 25.4% of our Summon traffic has come from the
IP addresses we've given as "on campus", according to the stats Serials
Solutions provides. Note that another 11.8% came from the local ISP that
provides wireless for our students, so most of that would be "on campus"
at other institutions.

> But it may very well be the extra "restricted" content is not important
> and nobody minds its absence. (Which would make one wonder why the
> vendor bothers to spend resources putting it in there!)

That's been our view (though you're making me think we should perhaps
try to understand better what the difference is).

The A&I results are interesting. EDS seems to promote results from their
own A&I databases more highly than I would expect, and they're certainly
noticeable when blanked out with "cannot be displayed to guests". When
Summon started showing A&I results there was some interesting discussion
on the mailing list - they're not immediately accessible, so they're
arguably not "in the library's collection". (And Summon, like Primo, has
an option to "add results beyond your library's collection".) There was
some argument on the other side, that it's important for A&I results to
be included, so it seems there is librarian pressure as well as
commercial/licence pressure.

David
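For anyone without vendor-supplied stats, the on/off-campus split is easy to derive from web logs by matching request IPs against the campus address ranges. A minimal sketch, using documentation CIDR blocks and made-up IPs rather than real ones:

```python
from ipaddress import ip_address, ip_network

# Illustrative "on campus" ranges (RFC 5737 documentation blocks).
CAMPUS_NETWORKS = [ip_network("192.0.2.0/24"),
                   ip_network("198.51.100.0/24")]

def on_campus(ip: str) -> bool:
    addr = ip_address(ip)
    return any(addr in net for net in CAMPUS_NETWORKS)

# Made-up request IPs standing in for a parsed access log.
hits = ["192.0.2.17", "203.0.113.5", "198.51.100.250"]
share = sum(on_campus(h) for h in hits) / len(hits)
print(f"{share:.1%} on campus")  # -> 66.7% on campus
```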
Re: [CODE4LIB] visualize website
> We're doing a survey of our web content and I'm looking for
> visualization tools. The content is on a redhat box served up by
> apache. Anyone recommend a tool or set of tools they like?

"Baobab" (aka "Disk Usage Analyzer"), which is included in gnome-utils,
is fairly good. It offers ring chart and treemap views. K4DirStat does
something similar for KDE.

If you don't have a GUI on the server, it looks like philesight will
provide a ring chart view through the web server (I haven't tried it
myself):
http://zevv.nl/play/code/philesight/

Cheers
David
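If none of those fit, the underlying numbers are easy to get yourself - a minimal sketch that totals bytes per top-level directory, which is the same data a ring chart or treemap encodes (the document root path at the bottom is an illustrative assumption):

```python
import os

def dir_size(path: str) -> int:
    """Sum the sizes of all files beneath path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # broken symlinks etc.
    return total

def report(docroot: str) -> None:
    """Print bytes used per top-level directory under docroot."""
    for entry in sorted(os.listdir(docroot)):
        full = os.path.join(docroot, entry)
        if os.path.isdir(full):
            print(f"{dir_size(full):>12,}  {entry}")

# report("/var/www/html")
```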
Re: [CODE4LIB] Learning Microsoft SQL
> I generally find the w3schools stuff a pretty good starting point to
> help wrap my head around something I don't know:

I've used w3schools a fair bit in the past too, as they rank pretty
highly in Google, but I've recently been made aware of advice that they
aren't necessarily to be relied on, e.g. http://w3fools.com/

David
Re: [CODE4LIB] Local catalog records and Google, Bing, Yahoo!
> I tend to agree with Jonathan Rochkind that having every library's bib
> record turn up as a Google snippet would be unwelcome. Better to
> mediate the access to local library copies with something more
> generic.

So when someone searches for a book in Google they should see every
online bookstore's page for the book, every social reading page for the
book (LibraryThing, Goodreads etc.), every review of the book on a
blog, ... and, buried somewhere, a single "library" link (i.e.
WorldCat)? Where they have to manually enter their location to see local
listings. [1]

If you were Joe Average Reader searching for a book in Google, which
would be more welcome: having to stumble onto a website that shows you
all the local library holdings ("stumble" because who knows that that's
what worldcat.org is?), or having the records for the local libraries
that hold it show up in the top 10 or 20 results because Google knows
your location and so makes them extra relevant?

Perhaps we could even have both - if most libraries' OPACs were "online"
and each record page linked to the appropriate WorldCat record page,
then WorldCat would have a much larger PageRank than it does now.

David

[1] Maybe it works better in the US, but even though it can tell I'm
physically at the University of Waikato Library (it offers me links to
our link resolver and catalogue), I still have to type in "New Zealand"
to see what other libraries hold the item.
Re: [CODE4LIB] Local catalog records and Google, Bing, Yahoo!
>>> why local library catalog records do not show up in search results?

Basically, most OPACs are crap. :-)

There are still some that don't provide persistent links to record
pages, and most are designed so that the user has a "session" and gets
kicked out after 10 minutes or so. These issues were part of Tim
Spalding's message that as well as joining web 2.0, libraries also need
to join web 1.0.
http://vimeo.com/user2734401

>> We don't allow crawlers because it has caused serious performance
>> issues in the past.

Specifically (in our case at least), each request creates a new session
on the server which doesn't time out for about 10 minutes, so a crawler
would fill up the system's RAM pretty quickly.

> You can use Crawl-delay:
> http://en.wikipedia.org/wiki/Robots_exclusion_standard#Crawl-delay_directive
>
> You can set Google's crawl rate in Webmaster Tools as well.

I've had this suggested before and thought about it, but never had it
high enough up my list to test it out. Has anyone actually used the
above to get a similar OPAC crawled successfully without bringing it to
its knees?

David
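For reference, a hypothetical robots.txt along the lines suggested - the paths are illustrative, and note that Crawl-delay is a non-standard directive honoured by crawlers such as Bing and Yandex but ignored by Googlebot, which is why Google's crawl rate has its own setting in Webmaster Tools:

```
# Throttle compliant crawlers to one request per minute, and keep them
# out of the session-spawning search interface entirely.
User-agent: *
Crawl-delay: 60
Disallow: /search
```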
Re: [CODE4LIB] Obvious answer to registration limitations
(This discussion happened a couple of weeks ago during the summer break
here, but I figured it was still worth adding my couple of cents.)

> > so, from Monday to Thursday, each day at noon
> > Eastern, 50 registration slots open.
>
> I think this is a fantastic idea -- especially if you shift around the
> timeslot so that it is beneficial to people in different time zones

Shifting times would be good. The registration opened at 5am here,
though I probably would have gotten up for it had I known it was going
to go so quickly. (Did you have to pay when you registered? If so, I
don't think I could have convinced the holder of an institutional credit
card to get up with me.)

I'll also +1 the suggestion of limiting attendees per organisation if
the overall number is going to be kept small.

David
--
oʇɐʞıɐʍ ɟo ʎʇısɹǝʌıun uɐıɹɐɹqıן sɯǝʇsʎs
Re: [CODE4LIB] Examples of visual searching or browsing
> Clicking on one of Ben Shneiderman's treemapping projects reminded me
> that I've always thought treemaps [1] would serve well as a browsing
> interface for library and archive collections because they work well
> with hierarchical data.

I played around with this earlier in the year, wanting to provide a
drill-down into our collections by call number. For our Education
Library's Teaching Collection, I used a three-level visualisation of
items based on the Dewey hierarchy, coloured by the proportion of "new"
(post-2006) items. I never put it online anywhere, so have attached it
here.

Dewey was pretty easy - labels for the first three levels were simple to
get, and that seemed reasonable enough for most areas. But the majority
of our items are LCC, and that's where I ran aground. The labels for the
first two letters are readily available, but far too general to make
this interesting. I couldn't seem to find any useful data in
machine-readable format. Sourcing another level down from LoC [1] or
Wikipedia [2] seems tantalisingly close, but there's a whole lot of
manual effort in turning these (incomplete) ranges into something
usable.

Cheers
David

[1] http://www.loc.gov/catdir/cpso/lcco/
[2] http://en.wikipedia.org/wiki/Library_of_Congress_Classification
--
oʇɐʞıɐʍ ɟo ʎʇısɹǝʌıun uɐıɹɐɹqıן sɯǝʇsʎs
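The Dewey side of the drill-down can be sketched roughly like this (not my actual code): bucket each call number into its three ancestor levels (class / division / section) and count items per node - the counts a treemap cell's area would encode. The sample call numbers are illustrative.

```python
from collections import Counter

def dewey_levels(call_number: str):
    """Return the three ancestor classes of a Dewey number,
    e.g. '372.41' -> ('300', '370', '372')."""
    digits = call_number[:3]
    return digits[0] + "00", digits[:2] + "0", digits

# Made-up Teaching Collection call numbers.
items = ["372.41", "371.102", "510.7", "372.6"]

counts = Counter()
for cn in items:
    for node in set(dewey_levels(cn)):  # set() so '510' isn't counted twice
        counts[node] += 1

print(counts["300"], counts["370"], counts["372"])  # -> 3 3 2
```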
Re: [CODE4LIB] Seth Godin on The future of the library
>> Some ebooks, in fact some of the greatest ever written, already cost
>> less than razor blades.
>
> Do you mean ones not under copyright?

Those, plus Creative Commons etc.
Re: [CODE4LIB] geo-locating email domains
> For a good time I geo-located the email domains of Code4Lib
> subscribers, plotted them on a Google map

Eric, that is pretty awesome! :-)

Disappointed not to show up in there though. There are 6 subscribers
with a New Zealand domain, but no mark on the map. (In comparison,
NGC4Lib has 17 and has a mark over Wellington.)

On the bright side, the last time I saw a map of New Zealand covered in
those circles was a visualisation of the hundreds of aftershocks in
Christchurch, so a little visual peace isn't necessarily a bad thing.
:-)

Cheers
David
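A sketch of the first step of such an exercise (I don't know how Eric actually did it): group subscriber addresses by country-code TLD before geocoding. The addresses and the tiny country table are illustrative; a real run would use the full subscriber list and a proper TLD-to-country dataset, which may explain where the New Zealand addresses fell through.

```python
from collections import Counter

# Illustrative subset of a ccTLD-to-country table.
CCTLD_COUNTRIES = {"nz": "New Zealand", "au": "Australia",
                   "uk": "United Kingdom"}

def country_of(email: str) -> str:
    """Map an email address to a country via its top-level domain."""
    tld = email.rsplit(".", 1)[-1].lower()
    return CCTLD_COUNTRIES.get(tld, "unknown/generic")

# Made-up subscriber addresses.
subscribers = ["a@waikato.ac.nz", "b@example.edu.au",
               "c@lib.example.org"]
print(Counter(country_of(s) for s in subscribers))
```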