Re: [CODE4LIB] SerSol 360Link API?

2010-04-18 Thread David Pattern
Hiya

We're using it to add e-holdings to our OPAC, e.g. 
http://library.hud.ac.uk/catlink/bib/396817/

I've also tried using the API to add the coverage info to the "availability" 
text for journals in Summon (e.g. "Availability: print (1998-2005) & electronic 
(2000-present)").
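
For anyone who hasn't used it, the gist of a call is an OpenURL-style
HTTP request that returns XML.  Here's a rough sketch with LWP -- the
hostname, path, parameters and element names are assumptions from
memory, so check the official 360 Link API documentation before relying
on any of them:

#!/usr/bin/perl
# Rough sketch only: the endpoint URL, "version" parameter and
# <startDate> element below are illustrative assumptions -- consult
# the 360 Link API documentation for the real details.
use strict;
use warnings;
use LWP::UserAgent;

my $client_id = 'yourclientid';   # hypothetical client code
my $issn      = '0028-0836';

my $ua  = LWP::UserAgent->new( timeout => 30 );
my $url = "http://$client_id.openurl.xml.serialssolutions.com/openurlxml"
        . "?version=1.0&rft.issn=$issn";

my $response = $ua->get($url);
die 'API error: ' . $response->status_line unless $response->is_success;

# Crude scrape of coverage start dates; a real implementation would
# use a proper XML parser such as XML::LibXML.
while ( $response->decoded_content =~ m{<startDate>(.*?)</startDate>}g ) {
    print "coverage starts: $1\n";
}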

I've made quite a few tweaks to our 360 Link (mostly using jQuery), so I'm half 
tempted to have a go at using the API to develop a complete replacement for 360 
Link.  If anyone's already done that, I'd be keen to hear more.

regards
Dave Pattern
University of Huddersfield


From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Jonathan 
Rochkind [rochk...@jhu.edu]
Sent: 19 April 2010 03:50
To: CODE4LIB@LISTSERV.ND.EDU
Subject: [CODE4LIB] SerSol 360Link API?

Is anyone using the SerSol 360Link API in a real-world production or 
near-production application?  If so, I'm curious what you are using it for, 
what your experiences have been, and in particular if you have information on 
typical response times of their web API.  You could reply on list or off list 
just to me. If I get interesting information especially from several sources, 
I'll try to summarize on list and/or blog either way.

Jonathan




Re: [CODE4LIB] calling another webpage within CGI script

2009-11-24 Thread David Pattern
Hi Ken

Are you behind a web proxy server or firewall?  If so, you'll probably need to 
specify a proxy server in the script.

If the proxy is defined in the environment variables on the server, then you 
can use...

  my $ua = LWP::UserAgent->new( timeout => 60 );
  $ua->env_proxy();

...otherwise, you might need to hardcode it into the script...

  my $ua = LWP::UserAgent->new( timeout => 60 );
  $ua->proxy(['http'], 'http://squid.wittenberg.edu:3128');

(replace "squid.wittenberg.edu:3128" with whatever the proxy server name and 
port number actually are)
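
Putting it all together, a minimal end-to-end test script would look
something like this (the target URL is just an example):

#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new( timeout => 60 );
$ua->env_proxy();   # pick up http_proxy etc. from the environment

my $response = $ua->get('http://www.npr.org/');

if ( $response->is_success ) {
    print $response->decoded_content;
}
else {
    # status_line gives messages like the "500 Can't connect ..."
    # error quoted below, which usually points at a network, proxy or
    # server configuration issue rather than a bug in the script.
    die 'HTTP Error: ' . $response->status_line . "\n";
}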

regards
Dave Pattern
University of Huddersfield


From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Ken Irwin 
[kir...@wittenberg.edu]
Sent: 23 November 2009 19:41
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] calling another webpage within CGI script

Hi Joe,

That's really helpful, thanks.
Actually finding out what the error message is, is nice:

HTTP Error : 500 Can't connect to www.npr.org:80 (connect: Permission denied)

I've tried this with a few websites and always get the same error, which tells 
me that the problem is on my server side. Any idea what I can change so I don't 
get a permission-denied rejection? I'm not even sure what system I should be 
looking at.

I tried Vishwam's suggestion of granting 777 permissions to both the file and 
the directory and I get the same response.

Is there some Apache setting someplace that says "hey, don't you go making web 
calls while I'm in charge"?

(This is a Fedora server running Apache, btw).

I don't know what to poke at!

Ken




[CODE4LIB] developer competition - library usage data

2009-07-31 Thread David Pattern
Hi everyone

Just a quick plug for a developer competition that's being run by the 
JISC-funded MOSAIC ("Making Our Shared Activity Information Count") Project in 
the UK: http://www.sero.co.uk/jisc-mosaic-competition.html

The usage data can be found for download via: 
http://library.hud.ac.uk/wikis/mosaic/index.php/Project_Data

...and the "Data Collection Guide" explains the XML format of the data: 
http://library.hud.ac.uk/wikis/mosaic/index.php/Project_Documentation

At present, it's just book usage data from the University of Huddersfield 
that's available to play around with, but we're hoping that it will be joined 
by some usage data from a few other UK academic libraries in due course.

As the XML usage data files are pretty big (opening them in Internet 
Explorer is a sure-fire way of killing a PC!), I've put together a quick & 
dirty API for grabbing subsets of the data: 
http://www.daveyp.com/blog/archives/953
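
If you'd rather crunch the full files locally, a stream-based parser
keeps the memory usage flat.  Here's a sketch using XML::Twig -- note
that the element names are invented placeholders, so check the Data
Collection Guide above for the actual schema:

#!/usr/bin/perl
# Sketch: stream-parse a large usage-data file one record at a time.
# <useRecord> and <isbn> are hypothetical element names.
use strict;
use warnings;
use XML::Twig;

my $twig = XML::Twig->new(
    twig_handlers => { 'useRecord' => \&handle_record },
);
$twig->parsefile('mosaic-usage.xml');

sub handle_record {
    my ( $twig, $rec ) = @_;
    print $rec->first_child_text('isbn'), "\n";
    $twig->purge;   # free the memory used by records processed so far
}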

Tony Hirst (from the Open University in the UK) has done a helpful blog post 
here: http://bit.ly/8RjYU

In terms of the competition, it's open to anyone (i.e. not just developers 
based in the UK).  The prizes are in UK sterling and the competition is being 
run according to UK law (just in case competition laws vary from country to 
country).

Have fun!

Dave Pattern
Library Systems Manager
University of Huddersfield



[CODE4LIB] Mashed Library UK 2009 - registration now open

2009-04-30 Thread David Pattern
Hope this might be of interest to some of you.  I'm not sure how feasible it'll 
be to stream and/or video the event, but we're currently looking into it.

regards
Dave Pattern
University of Huddersfield

-

Mashed Library UK 2009: Mash Oop North!
Date: Tuesday 7th July 2009
Time: 10.00am until late afternoon
Venue: University of Huddersfield, Huddersfield, HD1 3DH
Web site: http://mashlib09.wordpress.com
Fee: £15 (ex. VAT)
Speakers: Tony Hirst, Mike Ellis, Brendan Dawes, Richard Wallis and more
Primary sponsor: Talis

The first Mashed Library UK event, organised by Owen Stephens, was held at 
Birkbeck College in November 2008 with the aim of "bringing together interested 
people and doing interesting stuff with libraries and technology".  Further 
details about the 2008 event are available here: http://mashedlibrary.ning.com

The University of Huddersfield is proud to be hosting the second event, dubbed 
"Mash Oop North!", which is being sponsored by Talis.  The event will take 
place in Huddersfield on July 7th.

Mashed Library is aimed at librarians, library developers and library techies 
who want to learn more about Web 2.0 & 3.0, Library 2.0, creating mash-ups and 
generally doing interesting/cool/useful things with data.  In particular, we 
expect the event to generate the following outcomes for all attendees:

1) Awareness of the latest developments in library technology
2) Application of Web 2.0 technologies in a library context
3) Community building and networking
4) Learning new skills and developing existing ones

The event is primarily an "unconference", so attendees will be encouraged to 
participate throughout the day.  Further information is available on the event 
blog: http://mashlib09.wordpress.com

A small token registration fee of £15 is the only charge for the event.  Places 
are limited to around 60 delegates, so we would advise booking early to avoid 
disappointment!



Re: [CODE4LIB] [Web4lib] A million free covers, from LibraryThing

2008-08-07 Thread David Pattern
> Publishers make their covers available to them and to others because
> they desperately want their covers out there. You can get covers from
> publishers with amazing ease. I do not suspect Amazon or Syndetics
> have licensed the covers in any way.

Having worked for a number of years for a children's library book supplier in 
the mid 1990s in the UK, I can concur with Tim -- all of the publishers we 
dealt with (which included all of the major players) were more than happy to 
supply us with book cover scans to use on our web site.  The only issue for us 
was the wide variety in quality (from tiny GIFs to massive TIFFs), so we ended 
up doing all of the cover scanning ourselves in-house (again, the publishers 
were happy for us to do this).

On the subject of copyright, wasn't there a recent case brought against 
Google's Image Search where the judge ruled that thumbnails do not violate the 
copyright of the original image?

regards
Dave Pattern
University of Huddersfield











Re: [CODE4LIB] perl recaptcha?

2008-07-01 Thread David Pattern
Something that's worked well for me on my blog (although it doesn't stop
100% of spam) is to require the user to tick a checkbox before
submitting the comment.  You have the box unticked by default, but you
can include a bit of JavaScript to auto-tick it once the page has
loaded, so the majority of users don't need to do anything.
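
A minimal sketch of the idea as a Perl CGI script is below -- the field
names and form layout are made up for illustration:

#!/usr/bin/perl
# Sketch: checkbox-based spam trap.  Bots that POST the form without
# running JavaScript leave the box unticked, so they get rejected.
use strict;
use warnings;
use CGI;

my $q = CGI->new;

if ( $q->param('submitted') ) {
    print $q->header;
    print $q->param('not_a_robot')
        ? "Thanks for your comment!"
        : "Comment rejected.";
    exit;
}

# Render the form: the checkbox starts unticked, and a one-line
# script ticks it once the page has loaded.
print $q->header, <<'HTML';
<form method="post" action="">
  <textarea name="comment"></textarea>
  <input type="checkbox" name="not_a_robot" value="1" id="nar">
  <input type="hidden" name="submitted" value="1">
  <input type="submit" value="Post comment">
</form>
<script type="text/javascript">
  window.onload = function () {
    document.getElementById('nar').checked = true;
  };
</script>
HTML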

Dave Pattern
Library Systems Manager
University of Huddersfield

email: [EMAIL PROTECTED]

-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Jonathan Rochkind
Sent: 01 July 2008 14:49
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] perl recaptcha?

The Recaptcha device specifically also provides an audio test. But point
taken, even so it could present accessibility challenges.

Nevertheless, when my system is currently receiving around one
software-powered spam per minute, I need a quick pre-built drop-in
solution to this; I don't have time to write my own AI!  If you have any
other free or affordable pre-built drop-in solutions to spam protection
to suggest, this would be a great forum to do so!

My particular situation isn't even a web forum---it's a comment form
that does nothing but send email to librarians. But the spam bots don't
know that, and are sending 1 spam per minute to it.  "Pre-moderation" is
not a solution; that's what we're doing now, but we can't afford to hire
an FTE just to separate our actual user feedback from spam!

Jonathan

 







Re: [CODE4LIB] facebook

2008-01-07 Thread David Pattern
Thanks Eric!

I had a little mess around with the Perl Facebook API last year, but
didn't get very far.

Out of interest, once you've got the first fortune, when you refresh
your FB profile does it trigger a new fortune to be sent, or do you see
the same (i.e. cached) fortune?

What I want to do is put together a small application that will give the
user info from their library account, e.g.

You have 5 books on loan, and 2 of them need returning tomorrow.
Click here to go to your library account if you'd like to renew
them.

Obviously I'd need to figure out a secure and safe way of associating a
Facebook user ID with a specific library account.

regards
Dave Pattern
Library Systems Manager
University of Huddersfield


-Original Message-
From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
Eric Lease Morgan
Sent: 07 January 2008 00:55
To: CODE4LIB@listserv.nd.edu
Subject: [CODE4LIB] facebook

I am having a bit of fun with Facebook.

Last Friday I got a renewed interest in Facebook. Don't ask me why. I
don't know. I do know, though, that syndicating library content to
social networks (Facebook, MySpace, Delicious, etc.) seems to be all
the rage. To that end I have taken a stab at writing a few Facebook
applications, and below is the simplest one shared here in the hopes
other (Perl) hackers don't spin their wheels as much as I did.

...snip...

--
Eric Lease Morgan
University Libraries of Notre Dame








[CODE4LIB] OPAC survey - initial findings

2007-04-16 Thread David Pattern
Hi all!

Many thanks to everyone who responded to the recent OPAC survey -- in
total there were 729 responses.

I'll be publishing an informal PDF report sometime around the end of
May, but I've already started adding data, graphs and initial findings to
my weblog.  I'd love to know if there are any surprises in the findings,
or if you think it's just telling you what you already know!

http://www.daveyp.com/blog/index.php/archives/205/
http://www.daveyp.com/blog/index.php/archives/206/
http://www.daveyp.com/blog/index.php/archives/207/
http://www.daveyp.com/blog/index.php/archives/208/
http://www.daveyp.com/blog/index.php/archives/209/
http://www.daveyp.com/blog/index.php/archives/210/

The comments from respondents (which will be included in the PDF report)
ran the whole spectrum of opinion -- from those who thought the OPAC is
already a defunct technology, to those who obviously feel that their
OPAC should be nothing more than an electronic version of a card
catalog.

regards
Dave Pattern
University of Huddersfield

p.s. apologies for cross-posting this to several lists








Re: [CODE4LIB] pspell aspell: make your own word lists/dictionaries

2007-04-03 Thread David Pattern
Hi Kevin

We've been using aspell for just over a year using a similar method to the one 
you've outlined.  The command line I've been using to build the custom 
dictionary (on a Windows box) is:

aspell.exe --lang=en_GB create master ./title.list < titlewords.txt

...where "titlewords.txt" is a file containing the unique words from the item 
titles (with each word on a separate line) and "title.list" is the dictionary 
file that gets created.
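
In case it's useful, a quick-and-dirty way of building "titlewords.txt"
from a plain-text dump of your titles (one title per line) might be
something like this -- the filenames are hypothetical:

#!/usr/bin/perl
# Sketch: turn a file of titles (one per line) into a unique,
# one-word-per-line list for feeding to "aspell create master".
use strict;
use warnings;

my %seen;
while ( my $line = <> ) {
    for my $word ( $line =~ /([A-Za-z']+)/g ) {
        print lc($word), "\n" unless $seen{ lc $word }++;
    }
}

Usage:  perl words.pl titles.txt > titlewords.txt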

Unfortunately I did our implementation in mod_perl, so I'm not sure how you go 
about getting PHP to pick up a custom dictionary.  Anyway, using the Perl 
Text::Aspell module, our code contains:

use Text::Aspell;

my $speller = Text::Aspell->new;
$speller->set_option('sug-mode','ultra');   # fastest suggestion mode
$speller->set_option('master','/Apache2/modperl/HIP/title.list');   # custom dictionary

my @suggestions = $speller->suggest( $word );

If you want to see it in action, try these:

http://library.hud.ac.uk/catlink/title/newmonia
http://library.hud.ac.uk/catlink/author/newmonia
http://library.hud.ac.uk/catlink/title/gibberish

...also, be aware that using your own custom dictionaries might highlight the 
typos in some of your MARC records!

http://library.hud.ac.uk/catlink/general/suckcesful


regards
Dave Pattern
University of Huddersfield







From: Code for Libraries on behalf of Kevin Kierans
Sent: Tue 4/3/2007 5:40 PM
To: CODE4LIB@listserv.nd.edu
Subject: [CODE4LIB] pspell aspell: make your own word lists/dictionaries



Has anyone created their own "dictionaries"
for aspell?  We've created blank-delimited
lists of words from our opac.  One for titles,
one for subjects, and one for authors.  (We're thinking
of a series one as well)

We would like to use
one of these word lists to offer suggestions
depending on which search the patron is making.
We're assuming we can make better suggestions
if the words come from our actual opac.

We've got it working with the dictionary that
comes with aspell, but we're having problems (we can't do it!)
substituting our own "dictionaries."

Does anyone have any experience/knowledge/hints/pointers
they can share with us?

We are using Linux, PHP 5, aspell 0.50.5, and
the PHP pspell functions.

Thanks,
Kevin
TNRD Library System, Kamloops, British Columbia, Canada











[CODE4LIB] quick OPAC survey

2007-03-27 Thread David Pattern
Hi everyone

I'm running a brief informal survey about web-based OPACs, their ease of
use, and the importance of various "2.0" features.  It would be great to
get as many responses as possible, especially from librarians and staff who
are involved with the administration and/or development of the OPAC at
their own library.

If you have a couple of minutes to spare, then please consider
responding:

http://www.daveyp.com/blog/stuff/opac.html

I'll post the final results before I set off for the Library and
Information Show UK in mid-April, but there are already some interesting
trends appearing!

regards
Dave Pattern
Library Systems Manager
University of Huddersfield








Re: [CODE4LIB] munging wikimedia

2006-09-10 Thread David Pattern
Hi Eric
 
The best place to look is probably 
http://meta.wikimedia.org/wiki/Alternative_parsers 
 
I'm guessing the "non-parser dumper", which uses MediaWiki's internal code to 
do the rendering, might be a good choice.
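
If all you need for indexing is rough plain text, a crude fallback is to
regex-strip the most common markup.  This only scratches the surface
(templates, tables and nesting make the real grammar much messier),
which is why the alternative parsers page is the better starting point:

#!/usr/bin/perl
# Crude, illustrative plain-text conversion of common MediaWiki markup.
use strict;
use warnings;

sub strip_wiki_markup {
    my ($text) = @_;
    $text =~ s/\{\{[^{}]*\}\}//gs;                      # simple {{templates}}
    $text =~ s/\[\[(?:[^\[\]|]*\|)?([^\[\]]*)\]\]/$1/g; # [[link|label]] -> label
    $text =~ s/'{2,}//g;                                # ''italic'' / '''bold'''
    $text =~ s/^=+\s*(.*?)\s*=+\s*$/$1/mg;              # == headings ==
    $text =~ s/^[*#:;]+\s*//mg;                         # list / indent markers
    return $text;
}

print strip_wiki_markup("== Heading ==\n* '''Bold''' [[Main Page|link]]\n");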
 
regards
Dave Pattern
University of Huddersfield
 



From: Code for Libraries on behalf of Eric Lease Morgan
Sent: Sun 10/09/2006 14:28
To: CODE4LIB@listserv.nd.edu
Subject: [CODE4LIB] munging wikimedia



How do I go about munging wikimedia content?

After realizing that downloadable data dumps of Wikipedia are sorted
by language code, I was able to acquire the 1.6 GB compressed data,
uncompress it, parse it with Parse::MediaWikiDump, and output things
like article title and article text.

The text contains all sorts of wikimedia mark-up: [[]], \\, #, ==, *,
etc. I suppose someone has already written something that converts
this markup into HTML and/or plain text, but I can't find anything.

If you were to get the Wikipeda content, cache it locally, index it,
and provide access to the index, then how would you deal with the
Wiki mark-up?

--
Eric Lease Morgan
University Libraries of Notre Dame





Re: [CODE4LIB] external linking to your images

2006-03-31 Thread David Pattern
Hi Eric

I'm a little rusty on Apache's Rewrite rules, but here's what I've got
set up in a .htaccess file (see
http://httpd.apache.org/docs/1.3/mod/mod_rewrite.html):

RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://216.239.*/.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://216.239.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://66.102.9.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://66.102.7.*$  [NC]
RewriteCond %{HTTP_REFERER} !^.*melanson.*$  [NC]
RewriteCond %{HTTP_REFERER} !^.*mindjack.*$  [NC]
RewriteCond %{HTTP_REFERER} !^.*google.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://daveyp.com/.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://daveyp.com$  [NC]
RewriteCond %{HTTP_REFERER} !^http://www.daveyp.com/.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://localhost/.*$  [NC]
RewriteCond %{HTTP_REFERER} !^http://www.daveyp.com$  [NC]
RewriteCond %{HTTP_REFERER} !^.*gordian.*$  [NC]
RewriteRule (.*)\.(jpg)$ http://www.daveyp.com/cgi-bin/no.pl?$1.$2 [R,NC]

The gist of the above is:

1) if no HTTP_REFERER string is sent (!^$), then allow them to see the
image
2) if any of the remaining RewriteCond directives match the HTTP_REFERER,
then allow them to see the image
3) otherwise, if the user is asking for a .jpg file, perform the
RewriteRule

I've used a Perl script (no.pl) to decide which image to send to the
user, but you could easily use something like the following to redirect
to a static image file:

RewriteRule (.*)\.(jpg)$ http://www.daveyp.com/no.png [R,NC]

Basically, my site is daveyp.com so that's a valid HTTP_REFERER, and the
others are sites that I'm happy to have use my images.

I'm no expert on Rewrites, so there might be an easier way of doing it.

Hope that helps!
Dave


> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On
> Behalf Of Eric Lease Morgan
> Sent: 31 March 2006 13:59
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] external linking to your images
>
>
> On Mar 31, 2006, at 7:37 AM, David Pattern wrote:
>
> > If I were you, I'd replace your "first home" picture with
> one showing
> > a condemned property
>
> I thought of this but I also thought it might be sort of rude
> on my part.
>
>
> > If you have admin access, then you can usually set up rules
> to limit
> > which referring sites can directly use your images.
>
> Can you or someone else here on this list be more specific
> about rules and referring sites? How do I configure Apache to
> do such a thing? Maybe I could get trickier and redirect such
> links to a donation page, or I could authorize certain links
> and not others, but now it is probably getting more
> complicated than it needs to be.
>
> --
> Eric Morgan
>



Re: [CODE4LIB] external linking to your images

2006-03-31 Thread David Pattern
Hi Eric

If I were you, I'd replace your "first home" picture with one showing a
condemned property (e.g.
http://www.annistonstar.com/gallery/2004/year_end/2004_sg45.jpg) so that
it appears on their web site.

If you have admin access, then you can usually set up rules to limit
which referring sites can directly use your images.

In my spare time I run a fairly popular DVD site, and I often get eBay
vendors linking directly to DVD cover scans.  So, I set up an Apache
rule that replaces the image with one suggesting that if the vendor is
happy to steal someone else's bandwidth, they might also be happy to
steal your money :-)

regards
Dave Pattern
Library Systems Manager
Computing & Library Services
University of Huddersfield



> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On
> Behalf Of Eric Lease Morgan
> Sent: 31 March 2006 13:08
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] external linking to your images
>
> Yep, this is exactly what is happening.
>
> People are linking to images directly from my site. They are
> sort of "hijacking" the images, and when loaded they use my
> hard disk, my processing power, and my network connection to
> make it happen. This reduces the amount of resources for my
> machine's more primary tasks. Mind you, it would be difficult
> for me to measure the resource usage, and as a librarian, I
> might say, "So what?" On the other hand sometimes people make
> fun of me and my images. Other times the images are put into
> an undesirable context too gross to even mention on a mailing list.
>
> Here is a less innocuous instance. Below is a URL. It
> describes some sort of mortgage service. On the page is a
> picture of a house. I took that picture and titled it "first
> home". When you search Google Images for "first home" this
> picture shows up as item #2:
>
>    http://www.dynastymortgageteam.com/
>
> To what degree are the people at dynastymortgageteam.com taking
> advantage of me and the system? To what degree are the norms of
> Internet behavior too new to determine the answer to that question?
> What about those other people who link to me for "personal use"?
> While it isn't scholarship, maybe I should be "cited" and have a link
> back to my home page and be granted attribution. Does anybody else
> remember an Internet adage that said, "If you don't want it copied,
> then don't put it on the Internet."
>
> These are things I wonder about.
>
> Finally, I have considered refusing to serve images to external
> referrers, but again, some of my professional ethics get in the way.
> (BTW, how would I go about doing such a thing?)
>
> --
> Eric Morgan
