Re: [CODE4LIB] Conference "all-timers?"

2013-02-15 Thread Andrew Nagy
Around where I was sitting, there were myself, Dan Chudnov, and Karen Coombs.


On Fri, Feb 15, 2013 at 9:53 AM, Michael J. Giarlo <
leftw...@alumni.rutgers.edu> wrote:

> Hi,
>
> Every year when hands shoot up in response to the question of "how many of
> you have attended all code4lib conferences?", I neglect to note who's
> raising those hands.
>
> Who are my fellow all-timers?
>
> -Mike
>


Re: [CODE4LIB] 2013 Code4lib Conference Registration (Change of time)

2012-11-28 Thread Andrew Nagy
Will there be reserved registration slots for speakers, or do they need to
be ready to register two minutes before noon Eastern like it's a Bruce
Springsteen concert?

-- Forwarded message --
From: Francis Kayiwa 
Date: Tue, Nov 27, 2012 at 1:16 PM
Subject: [CODE4LIB] 2013 Code4lib Conference Registration (Change of time)
To: CODE4LIB@listserv.nd.edu


Looks like quite a few of you missed the change of Registration date. If
you have registered today you did so on the "Test Server" and will need
to register next week.

Registration was moved to December 4th at noon Eastern Standard Time.

regards,
./fxk
--
Documentation is the castor oil of programming.  Managers know it must
be good because the programmers hate it so much.


[CODE4LIB] New Newcomer Dinner option

2012-02-03 Thread Andrew Nagy
Hi All - I just added another restaurant option to the newcomer dinner list
as the options are starting to look quite full.  I've listed Momiji - a new
Japanese restaurant that I have been wanting to try; it's a very short cab
ride from the hotel.  If anyone signs up, I'll make a reservation.

Andrew


Re: [CODE4LIB] Voting is open for code4lib 2012 presentations.

2011-11-22 Thread Andrew Nagy
My votes are not showing after returning to the voting page.  I thought I
remembered being able to modify my votes from previous years.  I went
through the first 30 or so, and wanted to come back to it to go through
more, but my votes are not persisting.  Is this a bug, a change, or a
failure in my memory?

Andrew

On Tue, Nov 22, 2011 at 2:14 PM, Michael J. Giarlo <
leftw...@alumni.rutgers.edu> wrote:

> POWERED BY DIEBOLD
>
>
> On Tue, Nov 22, 2011 at 14:08, Michael B. Klein  wrote:
> > Hmm. 404'ing for me now.
> >
> > On Nov 22, 2011, at 4:22 AM, Ross Singer  wrote:
> >
> >> Ok, the results screen should no longer be throwing an error.
> >>
> >> Vote early, vote often,
> >> -Ross.
> >>
> >> On Tue, Nov 22, 2011 at 6:57 AM, Ross Singer 
> wrote:
> >>> Mark, I'm only getting that for the "results" page.  Are you getting it
> >>> somewhere else?
> >>>
> >>> I'll fix the results page as soon as I can.
> >>>
> >>> -Ross.
> >>>
> >>> On Monday, November 21, 2011, Mark Diggory 
> wrote:
>  The ever popular...Internal Server Error
>  On Mon, Nov 21, 2011 at 7:34 PM, Anjanette Young
>  wrote:
> 
> > Voting for code4lib 2012 talks are now open.
> >
> > Voting will close at 5pm (PST) on December 9, 2011.
> >
> > Presentation criteria to keep in mind
> >
> >
> >- Usefulness
> >- Newness
> >- Geekiness
> >- Diversity of topics
> >
> > http://vote.code4lib.org/election/21 -- You will need your
> > code4lib.orglogin in order to vote. If you do not have one you can
> create
> > one at
> > http://code4lib.org/
> >
> > Presentation proposal descriptions can be found on the wiki
> >
> > http://wiki.code4lib.org/index.php/2012_talks_proposals
> >
> > Thank you to Ross Singer for keying in all 72 proposals!
> >
> > --Anjanette
> >
> >  --
> > You received this message because you are subscribed to the Google
> Groups
> > "code4libcon" group.
> > To post to this group, send email to code4lib...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > code4libcon+unsubscr...@googlegroups.com.
> > For more options, visit this group at
> > http://groups.google.com/group/code4libcon?hl=en.
> >
> 
> 
> 
>  --
>  [image: @mire Inc.]
>  *Mark Diggory*
>  *2888 Loker Avenue East, Suite 305, Carlsbad, CA. 92010*
>  *Esperantolaan 4, Heverlee 3001, Belgium*
>  http://www.atmire.com
> 
> >
>


Re: [CODE4LIB] Code4lib 2012 Seattle. Call for presentation proposals

2011-10-07 Thread Andrew Nagy
I'd like to hear more about the DPLA project - I hope we get a proposal
about that this year!  I'll post it to the wiki page.

Andrew

On Wed, Oct 5, 2011 at 6:17 PM, Anjanette Young wrote:

> Code4lib 2012 call for proposals.
>
> We are now accepting proposals for Code4lib 2012.
>
> Code4lib 2012 is a loosely-structured conference for library technologists
> to commune, gather/create/share ideas and software, be inspired, and forge
> collaborations.  The conference will be held Monday February 6th
> (Preconference Day) - Thursday February 9th, 2012 in Seattle, WA. More
> information can be found at http://code4lib.org/conference/2012/
>
> Prepared Talks
>
> Head over to the call for proposals page at
> http://wiki.code4lib.org/index.php/2012_talks_proposals and submit your
> idea
> for a prepared talk for this year's conference!  Proposals should be no
> longer than 500 words, and preferably many less.
>
> Prepared talks are 20 minutes (including setup and questions), and focus on
> one or more of the following areas:
>  * tools (some cool new software, software library or integration platform)
>  * specs (how to get the most out of some protocols, or proposals for new
> ones)
>  * challenges (one or more big problems we should collectively address)
>
> The community will vote on proposals using the criteria of:
>  * usefulness
>  * newness
>  * geekiness
>  * diversity of topics
>  * awesomeness
>
> Proposals can be submitted through Sunday, November 19th, 5pm (PST). Voting
> will commence soon thereafter and be open through Friday, December 9th.
> Successful candidates will be notified by December 12th. The submitter (and
> if necessary a second presenter) will be guaranteed an opportunity to
> register for the conference through December 23st.
>
> Proposals for preconferences are also open until November 19th, 5pm (PST).
> http://wiki.code4lib.org/index.php/2012_preconference_proposals
>
> We cannot accept every prepared talk proposal, but multiple lightning talk
> and breakout sessions will provide everyone who wishes to present with an
> opportunity to do so.
>
> --Anj
> Anjanette Young | Systems Librarian
> University of Washington Libraries
> Box 352900 | Seattle, WA 98195
> Phone: 206.616.2867
>


Re: [CODE4LIB] Code4Lib Community google custom search

2011-10-06 Thread Andrew Nagy
Nice job Jonathan - my first test search seemed to bring back rather
relevant materials, with the first result coming from the journal:
http://www.google.com/cse?cref=http%3A%2F%2Fcode4lib.org%2Ftest%2Fgoogle_cse_context.xml&q=virtual+reference&sa=Search&siteurl=www.code4lib.org%2Fcustom_search%2Fsearch_form.html#gsc.tab=0&gsc.q=virtual%20reference&gsc.page=1

Very cool and very useful

Andrew

On Thu, Oct 6, 2011 at 9:35 PM, Jonathan Rochkind  wrote:

> So I was in #code4lib, and skome asked about ideas for library hours. And I
> recalled that there have been at least two articles in the C4L Journal on
> this topic, so suggested them.
>
> Then I realized that there's enough body of work in the Journal to be worth
> searching there whenever you have an "ideas for dealing with X" question.
> You might not find anything, but I think there's enough chance you will,
> illustrated by that encounter with skome.
>
> Then I realized it's not just the journal -- what about a Google Custom
> Search that searches over the Journal, the Code4Lib wiki, the Code4Lib
> website, and perhaps most interestingly -- all the sites listed in Planet
> Code4Lib.
>
> Then I made it happen. Cause it seemed interesting and I'm a perfectionist,
> I even set things up so a cronjob automatically syncs the list of sites in
> the Planet with the Google custom search every night.
>
> The Planet stuff ends up potentially being a lot of noise -- I tried to
> custom 'boost' stuff from the Journal, but I'm not sure it worked. But I did
> configure things with facet-like limits including a "just the planet" limit,
> if you do want that. But even though it's sometimes a lot of noise, it's
> also potentially the most interesting/useful part of the search, otherwise
> it'd pretty much just be a Journal search, but now it includes a bunch of
> people's blogs, as well as other sites deemed of interest to Code4Lib
> community (including a couple other open source library tech journals) --
> without any extra curatorial work, just using the list already compiled for
> the Planet.
>
> I'm curious what people think of it. Try some searches for library tech
> questions or information and see how good your results are. If people find
> this useful, I'll try to include it on the main code4lib.org webpage in
> some prominent place, spruce up the look and feel etc. (Or try to draft
> someone else to do that, I think my time to work on this might be _just_
> about up after staying until 9.30 hacking on this cause it seemed cool).
>
> http://www.code4lib.org/**custom_search/search_form.html
>


Re: [CODE4LIB] 2012 preconference proposals wanted!

2011-09-26 Thread Andrew Nagy
Is anyone leading this session, or is it a free-for-all?  The Code4lib site is
down, so I can't see what's on the wiki.

We use Git very heavily with the engineering of Serials Solutions' Summon
and we'd be happy to have an engineer do a session on some of the ways we
use it on a fairly large project/codebase if the group is interested.

Thanks
Andrew

On Fri, Sep 23, 2011 at 12:17 PM, Rob Casson  wrote:

> youse_guys++
>
> looking forward to it
>
> On Fri, Sep 23, 2011 at 11:46 AM, Cary Gordon 
> wrote:
> > Afternoon is great. I am willing to help present.
> >
> > I am not excited about doing a git /subversion comparison, and would
> > rather see the time filled with git specific info. There is certainly
> > enough of it to keep us busy.
> >
> > I am not a raconteur, but a couple years ago, when the Drupal
> > migration from CVS was in its nascent stage, I was walking Dries
> > Buytaert back to his hotel... on Rue Git in Paris. He asked if I
> > though that was portentous. I said it was bzr.
> >
> > Thanks,
> >
> > Cary
> >
> > On Fri, Sep 23, 2011 at 7:47 AM, Ian Walls
> >  wrote:
> >> Cool, I'll add this to the wiki, then.
> >>
> >> Anyone prefer morning v. afternoon?  Afternoon is currently empty, so I
> >> figure it'd make sense to default there for now.  Unless folks want to
> talk
> >> about Git for the whole day
> >>
> >> Giving the session a cute name... "git" lends itself well to such.  I'm
> in
> >> no way wedded to the name; I may have had too much/little caffeine this
> >> morning.
> >>
> >>
> >> -Ian
> >>
> >> On Fri, Sep 23, 2011 at 10:38 AM, Kevin S. Clarke  >wrote:
> >>
> >>> On Fri, Sep 23, 2011 at 10:02 AM, Ian Walls
> >>>  wrote:
> >>> > If we still need someone to take the lead on this, I would
> >>> > volunteer.
> >>>
> >>> I don't believe anyone else has volunteered to lead so if you want to
> >>> do it, run with it!
> >>>
> >>> I'd be glad to do a quick bit on how easy it is to use gitolite for
> >>> private git repositories, if there is time for it (with all the other
> >>> good git topics that have been suggested).
> >>>
> >>> Thanks,
> >>> Kevin
> >>>
> >>
> >>
> >>
> >> --
> >> Ian Walls
> >> Lead Development Specialist
> >> ByWater Solutions
> >> Phone # (888) 900-8944
> >> http://bywatersolutions.com
> >> ian.wa...@bywatersolutions.com
> >> Twitter: @sekjal
> >>
> >
> >
> >
> > --
> > Cary Gordon
> > The Cherry Hill Company
> > http://chillco.com
> >
>


Re: [CODE4LIB] Code4Lib 2012 Seattle Update.

2011-06-10 Thread Andrew Nagy
Hi Anj - I just wanted to let you know that Serials Solutions is working out
a plan to better support the conference.  We'd possibly like to sponsor an
evening event; we will have more information for you later in the summer.

Cheers
Andrew


On Tue, Jun 7, 2011 at 1:14 PM, Anjanette Young wrote:

> Code4Lib Seattle 2012 update.  Thanks to Elizabeth Duell of Orbis Cascade
> Alliance and Cary Gordon of chillco.com, we finally have a venue with
> adequate (hopefully) bandwidth and wireless access points, a reasonable
> food
> & beverage minimum, and chairs!  The Renaissance Hotel (515 Madison St.,
> Seattle, WA 98104) is located in the chilly heart of downtown Seattle,
> still
> close to the University district, but even closer to the restaurants, bars,
> breweries and distilleries in the Belltown, Downtown, Pioneer Square, and
> Capitol Hill neighborhoods.
>
> We could use lots of help, please consider volunteering for a committee:
>
> http://wiki.code4lib.org/index.php/2012_committees_sign-up_page
>
> --Anj
> --
> Anjanette Young | Systems Librarian
> University of Washington Libraries
> Box 352900 | Seattle, WA 98195
> Phone: 206.616.2867
>


Re: [CODE4LIB] Adding VIAF links to Wikipedia

2011-05-26 Thread Andrew Nagy
Ralph - this sounds like a very valuable process.  I would imagine it could
solve the problem illustrated here:
http://journal.code4lib.org/articles/57

What would be the best path forward?  I'm not active in the Wikipedia
community, but I understand that there is a community of editors.  Perhaps
lobbying them for support while clearly identifying the value for the
community of scholarship would allow this to happen?

Does anyone have experience with the editorial group or policy group in the
Wikipedia community?

Cheers
Andrew

On Thu, May 26, 2011 at 2:01 PM, Ralph LeVan  wrote:

> OCLC Research would desperately love to add VIAF links to Wikipedia
> articles, but it seems to be very difficult.  The OpenLibrary folks tried
> to
> do it a while back and ended up getting their plans severely curtailed.
>  The
> discussion at Wikipedia is captured here:
>
> http://en.wikipedia.org/wiki/Wikipedia:Bots/Requests_for_approval/OpenlibraryBot
>
> Probably for very good reasons, this seems to be a very political process.
>  That means we need to have pretty good support both within and outside
> the Wikipedia community to do this.
>
> Starting with the friendliest community I can think of, is there such
> support?  Should we move forward on creating a ViafBot to stick VIAF links
> into Wikipedia?
>
> Thanks!
>
> Ralph
>


Re: [CODE4LIB] dealing with Summon

2011-03-01 Thread Andrew Nagy
Hi Godmar - I can help address your questions about the fields directly.
Though it would also be interesting to hear experiences from others who are
working with APIs to search systems such as Summon.

In regards to the publication date - the Summon API has the "raw date"
(which comes directly from the content provider), but we also provide a
field with a microformat containing the parsed and cleaned date that Summon
has generated.  We advise you to use our parsed and cleaned date rather
than the raw date.  The URL and URI fields are similar: the URL is the link
that we have generated - the URI is what is provided by the content
provider.  In your case, you appear to be referring to OPAC records, so the
URI is the ToC that came from the 856$u field in your MARC records.  The URL
is a link to the record in the OPAC.
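
As a loose illustration of coping with that variability on the client side
(this is not the documented Summon response format - the field names are the
ones discussed in this thread, and the record structure and preference order
are assumptions), a client could prefer one link field over the others:

<?php
// Prefer the vendor-generated link ('url') and fall back to the
// provider-supplied one; the shape of $record is hypothetical.
function pick_link(array $record)
{
    foreach (array('url', 'URL', 'URI') as $field) {
        if (!empty($record[$field])) {
            $value = $record[$field];
            // A field may be repeated; take the first value if so.
            return is_array($value) ? reset($value) : $value;
        }
    }
    return null;
}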

If you need more assistance around the fields that are available via Summon,
I'd be happy to take this conversation off-list.

I think an interesting conversation for the Code4Lib community would be
around a standardized approach for an API that meets both the needs of the
library developer and the product vendor.  I recall a brief chat I had with
Annette about this same topic at a NISO conference in Boston a while back.
For example, we have SRU/W, but that does not provide support for all of the
features that a search engine would need (e.g. facets, spelling corrections,
recommendations, etc.).  Maybe a new standard is needed - or maybe extending
an existing one would solve this need?  I'm all ears if you have any ideas.

Andrew


On Tue, Mar 1, 2011 at 2:14 PM, Godmar Back  wrote:

> Hi -
>
> this is a comment/question about a particular discovery system
> (Summon), but perhaps of more general interest. It's not intended as
> flamebait or criticism of the vendor or people associated with it.
>
> When integrating Summon into LibX (which works quite nicely btw,
> gratuitous screenshot is attached to this email) I found myself amazed
> by the multitude of possible fields and combinations returned in the
> resulting records. For instance, some records contains fields 'url'
> (lower case), and/or 'URL' (upper case), and/or 'URI' (upper case).
> Which one to display, and how?  For instance, some records contain an
> OPAC URL in the 'url' field, and a ToC link in the URI field. Why?
>
> Similarly, the date associated with a record can come in a variety of
> formats. Some are single-field (20080901), some are abbreviated
> (200811), some are separated into year, month, date, etc.  Some
> records have a mixture of those.
>
> My question is how do other adopters of Summon, or of emerging
> discovery systems that provide direct access to their records in
> general, deal with the roughness of the records being returned?  Are
> there best practices in how to extract information from them, and in
> how to prioritize relevant and weed out irrelevant or redundant
> information?
>
>  - Godmar
>


Re: [CODE4LIB] Ride sharing IND - Bloomington - IND

2010-12-17 Thread Andrew Nagy
To help better track ride share opportunities, I created a page on the
Code4Lib wiki.
http://wiki.code4lib.org/index.php/C4L2011_rideshare#Indianapolis_International_Airport

This way folks seeking ride share opportunities can sign up for a ride - and
those offering can list their ride.

Andrew

On Thu, Dec 16, 2010 at 5:52 PM, Cary Gordon  wrote:

> I will be renting a car and driving to Bloomington on Sunday, the 6th
> at about 630 PM (assuming on-time arrival at 6ish) and returning on
> the 10th in time to make my 7 PM flight.
>
> I can take one or two people with a reasonable amount of luggage each
> way, and no, they don't have to be the same people.
>
> Let me know if you are interested.
>
> Thanks,
>
> Cary
>
> --
> Cary Gordon
> The Cherry Hill Company
> http://chillco.com
>


Re: [CODE4LIB] algorithm for Summon's Recommender

2010-05-06 Thread Andrew Nagy
Hi Ya'aqov - I'm about to board a plane so I don't have much time for
a well-formed response.  We do not have anything published about
Summon's relevancy algorithms or the recommendation engine.  I'd be
happy to answer any specific questions offline as I don't feel it
appropriate to get into details about a commercial product in this
channel.

Andrew

On 5/6/10, Ziso, Ya'aqov  wrote:
> hi Andrew,
>
> bX derives from research done at Los  Alamos National Laboratory by Johan
> Bollen and Herbert Van de Sompel. Its ranking and algorithm can be analyzed
> in the published article
> http://www.slideshare.net/hvdsomp/the-bx-project-federating-and-mining-usage-logs-from-linking-servers
> Can SerialsSolutions point us to something explaining Summon’s Recommender?
>
> ==
>
> yaaq...@gmail.com
> •  If you're not part of the problem, you're not part of the solution •
>
>
>

-- 
Sent from my mobile device


Re: [CODE4LIB] Q: what is the best open source native XML database

2010-01-17 Thread Andrew Nagy
I've had the best luck with eXist and BerkeleyDB XML.

Both support XQuery and have indexing features based on any XML structure.
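
For example, a quick way to run an XQuery against eXist is over its REST
interface - a minimal sketch, assuming a stock install listening at
localhost:8080 and a hypothetical /db/records collection:

<?php
// Send an XQuery to eXist's REST interface via the _query parameter.
// Base URL and collection name are assumptions for a default install.
$xquery = 'for $r in collection("/db/records")//record
           where contains($r/title, "history")
           return $r/title';

$url = 'http://localhost:8080/exist/rest/db/records?_query=' . urlencode($xquery);

// eXist wraps the results in an XML envelope; print it as-is.
echo file_get_contents($url);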

Andrew

On 1/16/10, Godmar Back  wrote:
> Hi,
>
> we're currently looking for an XML database to store a variety of
> small-to-medium sized XML documents. The XML documents are
> unstructured in the sense that they do not follow a schema or DTD, and
> that their structure will be changing over time. We'll need to do
> efficient searching based on elements, attributes, and full text
> within text content. More importantly, the documents are mutable.
> We'll like to bring documents or fragments into memory in a DOM
> representation, manipulate them, then put them back into the database.
> Ideally, this should be done in a transaction-like manner. We need to
> efficiently serve document fragments over HTTP, ideally in a manner
> that allows for scaling through replication. We would prefer strong
> support for Java integration, but it's not a must.
>
> Have other encountered similar problems, and what have you been using?
>
> So far, we're researching: eXist-DB (http://exist.sourceforge.net/ ),
> Base-X (http://www.basex.org/ ), MonetDB/XQuery
> (http://www.monetdb.nl/XQuery/ ), Sedna
> (http://modis.ispras.ru/sedna/index.html ). Wikipedia lists a few
> others here: http://en.wikipedia.org/wiki/XML_database
> I'm wondering to what extent systems such as Lucene, or even digital
> object repositories such as Fedora could be coaxed into this usage
> scenario.
>
> Thanks for any insight you have or experience you can share.
>
>  - Godmar
>

-- 
Sent from my mobile device


Re: [CODE4LIB] Suggest a keynote speaker for Code4Lib 2010!

2009-07-23 Thread Andrew Nagy
I'd also be happy to nominate my old boss Joe Lucia at Villanova.  He is a
Library Director who fully supports Open Source software and speaks on it
from time to time.  He was the keynote speaker at the recent Evergreen
conference.

Andrew

On Thu, Jul 23, 2009 at 10:24 AM, Andreas Orphanides <
andreas_orphani...@ncsu.edu> wrote:

> Hi folks,
>
> The time has come once again to commence discussion of possible keynote
> speakers for the upcoming Code4Lib 2010 conference in Asheville!
>
> If you've got any suggestions for a speaker who'd be engaging,
> knowledgeable, and foolhardy enough to accept this high honor, throw their
> names to the list for discussion.
>
> We here at Code4Lib 2010 World Headquarters, deep under the sea, will
> accept nominations until *September 16, 2009*. Shortly thereafter we will
> open the polls for online voting.
>
> All suggestions and comments are welcome! Discuss away!
>
> Andreas Orphanides
> Code4Lib 2010 Keynote Speakers Committee
>


Re: [CODE4LIB] Suggest a keynote speaker for Code4Lib 2010!

2009-07-23 Thread Andrew Nagy
Stallman would be incredible!  Watch the movie Revolution OS if you haven't
yet.

On Thu, Jul 23, 2009 at 11:11 AM, Ranti Junus  wrote:

> I think Richard Stallman would be interesting. Just make sure somebody
> is ready to drag him away when his time is up. He's a, er, very
> passionate speaker.
>
>
> ranti.
>
> --
> Bulk mail.  Postage paid.
>


Re: [CODE4LIB] David Walker Wins Third OCLC Research Software Contest

2009-07-22 Thread Andrew Nagy
david_walker++

Just watched the video - great job David!

On Wed, Jul 22, 2009 at 9:01 PM, Roy Tennant  wrote:

> DUBLIN, Ohio, USA, 22 July 2009
>
> David Walker Wins Third OCLC Research Software Contest
>
> David Walker has won the Third OCLC Research Software Contest with Bridge,
> a
> set of services to provide a configurable and customizable full record
> display made up of WorldCat services.  These services provide the ability
> for an individual library to customize the full record display of WorldCat
> records to their particular situation.
>
> The contest judges were impressed with how Mr. Walker was able to provide a
> set of very useful methods to enhance WorldCat services from the
> perspective
> of individual libraries. The software architecture, code, and documentation
> also were impressive. As the contest winner, Mr. Walker will receive a
> check
> for $2,500 and a visit with OCLC researchers and others in Dublin, Ohio
> (USA).
>
> David Walker is Library Web Services Manager at California State
> University.
> More information about Bridge is linked below.
>
> The Third OCLC Research Software Contest ran from mid-April through the end
> of June.  Its goal was to encourage innovation in the use of OCLC web-based
> services for libraries.
>
> Entries were judged by a panel of expert practitioners and academicians
> from
> OCLC and the library/information community:
>
> Kevin Clarke
> Coordinator of Web Services
> Belk Library and Information Commons
> Appalachian State University
>
> Thom Hickey
> Chief Scientist
> OCLC
>
> Tod Matola
> Software Architect
> OCLC
>
> Ross Singer
> Interoperability and Open Standards Champion
> Talis
> and winner of the Second OCLC Research Software Contest
>
> Roy Tennant
> Senior Program Officer
> OCLC Research
>
>
> More information:
>
> David Walker's Bridge
> http://library.calstate.edu/bridge/
>
> Contest Overview
> http://www.oclc.org/research/researchworks/contest/
>
> Contest judges
> http://www.oclc.org/research/researchworks/contest/judges.htm
>
> Contacts:
>
> Roy Tennant
> Senior Program Officer
> OCLC Research
> roy_tenn...@oclc.org
> +1-707-287-5580
>
> Robert Bolander
> Senior Communications Officer
> OCLC Research
> bolan...@oclc.org
> +1-614-761-5207
>


Re: [CODE4LIB] tricky mod_rewrite

2009-07-01 Thread Andrew Nagy
You probably could if you got really tricky with the regex - but I would
say probably not.  mod_rewrite takes the entire URL into consideration, so
you need to denote where to start with RewriteBase.

I use this for vufind:
RewriteRule ^([^/]+)/(.+)$ index.php?module=$1&action=$2 [L,QSA]

Which allows you to map:
vufind.library.edu/Search/Results
to:
vufind.library.edu/index.php?module=Search&action=Results

So with that - you could capture all of the "directories" in the URL and
just remove anything that doesn't look familiar.  But that is very hackish
since it could easily break and make it very difficult for someone to debug.
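
For what it's worth, here is a minimal sketch of the PHP side of that rule -
index.php picks up the two captured segments as 'module' and 'action' and
dispatches on them.  The class layout and file paths below are hypothetical,
not VuFind's actual code:

<?php
// index.php -- receives module/action from the RewriteRule above.
$module = isset($_GET['module']) ? $_GET['module'] : 'Search';
$action = isset($_GET['action']) ? $_GET['action'] : 'Home';

// Whitelist the values before using them to build a file path.
if (preg_match('/^[A-Za-z]+$/', $module) && preg_match('/^[A-Za-z]+$/', $action)) {
    require_once 'services/' . $module . '.php';   // hypothetical layout
    $controller = new $module();
    $controller->$action();                        // e.g. Search->Results()
} else {
    header('HTTP/1.1 404 Not Found');
}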

Andrew

On Wed, Jul 1, 2009 at 9:20 AM, Godmar Back  wrote:

> On Wed, Jul 1, 2009 at 9:13 AM, Peter Kiraly  wrote:
>
> > From: "Godmar Back" 
> >
> >> is it possible to write this without hardwiring the RewriteBase in it?
>  So
> >> that it can be used, for instance, in an .htaccess file from within any
> >> /path?
> >>
> >
> > Yes, you can put it into a .htaccess file, and the URL rewrite will
> > apply on that directory only.
> >
>
> You misunderstood the question; let me rephrase it:
>
> Can I write a .htaccess file without specifying the path where the script
> will be located in RewriteBase?
> For instance, consider
>
> http://code.google.com/p/tictoclookup/source/browse/trunk/standalone/.htaccess
> Here, anybody who wishes to use this code has to adapt the .htaccess file
> to
> their path and change the "RewriteBase" entry.
>
> Is it possible to write a .htaccess file that works *no matter* where it is
> located, entirely based on where it is located relative to the Apache root
> or an Apache directory?
>
>  - Godmar
>


Re: [CODE4LIB] How to access environment variables in XSL

2009-06-19 Thread Andrew Nagy
If you are using an XSL processor from a programming language (Java, PHP,
Ruby), you can assign a variable to the XSL stylesheet and use it in the
stylesheet much like you would in any other scripting environment.
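
For example, with PHP's XSLTProcessor (a minimal sketch - the file names and
parameter names are just illustrative):

<?php
// Load the source document and the stylesheet.
$xml = new DOMDocument();
$xml->load('data.xml');

$xsl = new DOMDocument();
$xsl->load('page.xsl');

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

// Hand selected server values to the stylesheet as top-level parameters.
$proc->setParameter('', 'server-name', $_SERVER['SERVER_NAME']);
$proc->setParameter('', 'remote-addr', $_SERVER['REMOTE_ADDR']);

echo $proc->transformToXml($xml);

On the XSL side, declare <xsl:param name="server-name"/> at the top of the
stylesheet and reference it as $server-name.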

You can also go one step further and use XQuery, which gives you a
FLWOR-based environment where you can declare variables and introduce more
advanced logic than XSL allows.

Andrew

On Fri, Jun 19, 2009 at 3:44 PM, Doran, Michael D  wrote:

> I am working with some XSL pages that serve up HTML on the web.  I'm new to
> XSL.   In my prior web development, I was accustomed to being able to access
> environment variables (and their values, natch) in my CGI scripts and/or via
> Server Side Includes.  Is there an equivalent mechanism for accessing those
> environment variables within an XSL page?
>
> These are examples of the variables I'm referring to:
>SERVER_NAME
>SERVER_PORT
>HTTP_HOST
>DOCUMENT_URI
>REMOTE_ADDR
>HTTP_REFERER
>
> In a Perl CGI script, I would do something like this:
>my $server = $ENV{'SERVER_NAME'};
>
> Or in an SSI, I could do something like this:
>
>
> If it matters, I'm working in: Solaris/Apache/Tomcat
>
> I've googled this but not found anything useful yet (except for other
> people asking the same question).  Maybe I'm asking the wrong question.  Any
> help would be appreciated.
>
> -- Michael
>
> # Michael Doran, Systems Librarian
> # University of Texas at Arlington
> # 817-272-5326 office
> # 817-688-1926 mobile
> # do...@uta.edu
> # http://rocky.uta.edu/doran/
>
>


Re: [CODE4LIB] Serials Solutions Summon

2009-05-04 Thread Andrew Nagy
David - Keep in mind that aggregators are not the original publishers of
content - so even if an aggregator is not yet participating in Summon, the
content in their aggregated databases most often **is** indexed by the
service. To date there are already over 80 individual content providers
participating **in addition to** competing aggregators ProQuest and Gale,
bringing together content from over four thousand publishers.
Regardless of the competitive landscape among aggregators, publishers are
participating in Summon in order to increase discovery of their content.
It's a win-win.

Andrew


On Tue, Apr 21, 2009 at 11:33 AM, Walker, David wrote:

> Even though Summon is marketed as a Serial Solutions system, I tend to
> think of it more as coming from Proquest (the parent company, of course).
>
> Summon goes a bit beyond what Proquest and CSA have done in the past,
> loading outside publisher data, your local catalog records, and some other
> nice data (no small thing, mind you).  But, like Rob and Mike, I tend to see
> this as an evolutionary step for a database aggregator like Proquest rather
> than a revolutionary one.
>
> Obviously, database aggregators like Proquest, OCLC, and Ebsco are well
> positioned to do this kind of work.  The problem, though, is that they are
> also competitors.  At some point, if you want to have a truly unified local
> index of _all_ of your database, you're going to have to cross aggregator
> lines.  What happens then?
>
> --Dave
>
> ==
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu
> 
> From: Code for Libraries [code4...@listserv.nd.edu] On Behalf Of Dr R.
> Sanderson [azar...@liverpool.ac.uk]
> Sent: Tuesday, April 21, 2009 8:14 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Serials Solutions Summon
>
> On Tue, 21 Apr 2009, Eric Lease Morgan wrote:
> > On Apr 21, 2009, at 10:55 AM, Dr R. Sanderson wrote:
> >> How is this 'new type' of index any different from an index of OAI-PMH
> >> harvested material?  Which in turn is no different from any other
> >> local search, just a different method of ingesting the data?
>
> > This "new type" of index is not any different in functionality from a
> > well-implemented OAI service provider with the exception of the type
> > of content it contains.
>
> Not even the type of content, just the source of the content.
> Eg SS have come to an agreement with the publishers to use their
> content, and they've stuffed it all in one big index with a nice
> interface.
>
> NTSH, Move Along...
>
> Rob
>


Re: [CODE4LIB] Serials Solutions Summon

2009-04-22 Thread Andrew Nagy
On Wed, Apr 22, 2009 at 5:08 AM, Laurence Lockton wrote:

> --
>> Date:Tue, 21 Apr 2009 13:36:30 -0400
>> From:"Diane I. Hillmann" 
>> Subject: Re: Serials Solutions Summon
>>
>>  ...
>
>> 3. Because they also have data on what journals any particular library
>> customer has subscribed to, they can customize the product for each
>> library, ensuring that the library's users aren't served a bunch of
>> results that they ultimately can't access.
>>
>
> This is one of the great advantages of a local aggregated index, being able
> to flag which documents are actually available to your users, and giving
> them the choice of searching only for these. Lund University's ELIN does
> this and it's really popular. (See a picture <
> http://people.bath.ac.uk/lislgl/elin.png>)
>
> Is this being offered in Summon and WorldCat Local?
>

Laurence - Summon does have fulltext access as well as "scholarly or
peer-reviewed" available as facets, allowing users to narrow their search
results in exactly this way.  And it is great that you point this out - this
is one of the great benefits of having a single unified index.  You get to
pull all sorts of gems out of the boulders of content.  I am personally
getting really excited about what our community (code4lib) will be able to
invent on top of services such as Summon.  I think we are going to be able
to find many more gems, as well as mashups that allow for some fantastic
tools.

Andrew


Re: [CODE4LIB] Serials Solutions Summon

2009-04-18 Thread Andrew Nagy
Yitzchak - I'd be more than happy to answer any questions you have about
Summon.  I will give a brief description here - any other questions we can
discuss offline so as not to spam the mailing list with lots of propaganda
for Summon - though it is really awesome and everyone should purchase a
subscription :)

Summon is really more than an NGC, as we are selling it as a service - a
unified discovery service.  This means that it is a single repository of the
library's content (subscription content, catalog records, IR data, etc.).
Federated search is not a part of Summon (though federated search could be
used alongside Summon); all of your library's content is indexed in a
single repository, so there is no need for broadcast searching.  We have an
API for Summon that allows you to access the service with all of the
features that we offer through the Summon user interface.  This allows you
to "plug" Summon searching into an NGC such as VuFind or Blacklight (I've
done the development for Summon integration in VuFind already).  Our company
is also working on the Summon integration for AquaBrowser.

I'd be more than happy to give a demonstration for your institution on
Summon so you can see it in action and get a better understanding.

Please email me directly for any other questions - or if you would like to
schedule a demonstration for your library.

Cheers
Andrew

On Fri, Apr 17, 2009 at 12:03 PM, Yitzchak Schaffer wrote:

> Hello all:
>
> I see that there was an Andrew Nagy-led breakout on Summon at the con.
> Summon is a NGC product with the distinction of using a local copy of
> indexes of licensed content (by agreement with Elsevier, JSTOR, et alia) for
> federated search - rather than the traditional Z39.50 or API calls to vendor
> servers.
>
> Can anyone offer a brief summary of what was discussed?  I am particularly
> interested in the feasibility of obtaining local indexes for use in an OSS
> product.
>
> Best,
>
> --
> Yitzchak Schaffer
> Systems Manager
> Touro College Libraries
> 33 West 23rd Street
> New York, NY 10010
> Tel (212) 463-0400 x5230
> Fax (212) 627-3197
> Email yitzc...@touro.edu
> Twitter /torahsyslib
>


Re: [CODE4LIB] MARC-XML -> Qualified Dublin Core XSLT

2009-03-06 Thread Andrew Nagy
Hey David - per my last posting regarding MARCXML XSLTs, the LOC
maintains a large, very thorough collection of XSLTs for MARCXML:

http://www.loc.gov/standards/marcxml/xslt/

Andrew

On Fri, Mar 6, 2009 at 3:03 PM, Walker, David  wrote:

> Hi All,
>
> Anyone have an XSLT style sheet to convert from MARC-XML to Qualified
> Dublin Core?
>
> I'm looking to load these into DSpace, if that makes a difference.  Looks
> like LOC only has MARC-XML to Simple Dublin Core.  This page [1] mentions a
>  'MARCXML to Qualified DC styles heets' developed at the University of
> Illinois, but the links are dead.
>
> --Dave
>
> [1] http://cicharvest.grainger.uiuc.edu/schemas.asp
>
> ==
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu
>


Re: [CODE4LIB] Printed catalogs

2009-03-06 Thread Andrew Nagy
If you do choose to use XSLT, the Library of Congress has a bunch of XSLTs
for MARCXML which will save a tremendous amount of time for you.

http://www.loc.gov/standards/marcxml/xslt/
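
For the batch step, here is a rough sketch of running one of those
stylesheets over a directory of MARCXML files with PHP's XSLTProcessor - the
stylesheet filename and paths are placeholders, so substitute the actual
transform you pick from that page:

<?php
// Load a MARCXML transform downloaded from the LC stylesheet collection.
$xsl = new DOMDocument();
$xsl->load('MARC21slim2DC.xsl');        // placeholder filename

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

// Transform every record file and append the output to one catalog document.
$output = '';
foreach (glob('records/*.xml') as $file) {
    $marc = new DOMDocument();
    $marc->load($file);
    $output .= $proc->transformToXml($marc);
}

file_put_contents('catalog-body.txt', $output);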

Andrew

On Fri, Mar 6, 2009 at 1:09 PM, Jared Camins  wrote:

> Dear CODE4LIB,
>
> I think this sort of question would fall under the purview of this list,
> but
> if there's a better forum for my question, please let me know. I am
> cataloging a special collection in MARC (to take advantage of LC copy
> cataloging, primarily), but at the end of the project I will be producing a
> printed catalog for the owner of the collection. My plan is to use an XSLT
> stylesheet to produce the catalog from MARCXML. I already threw together a
> stylesheet to produce a brief HTML bibliography of the collection, so I am
> confident that this plan would work. We would probably use LaTeX rather
> than
> HTML for output for the final catalog, since that would make the final
> printing easier, not to mention index generation.
>
> My question is, has anyone done something like this? Any lessons learned
> the
> hard way, stylesheets I could model ours on, or any other advice?
>
> Thanks in advance for all your help.
>
> Regards,
> Jared Camins-Esakov
>
> P.S. I should mention that I am not entirely wed to the idea of using an
> XSLT stylesheet. It seems like the path of least resistance, but if anyone
> could suggest a better tool, I would be very interested to learn about it.
> I
> do have a background in programming, so I would be comfortable using
> C/Perl/whatever, if there were a good reason to do so.
>
> --
> Jared Camins-Esakov
> Freelance bibliographer and archivist
> (cell) +1 (917) 880-7649
> (e-mail)  jcam...@gmail.com
> (web) http://www.jaredcamins.com/
>


Re: [CODE4LIB] APIs that an OPAC should provide ...

2009-02-19 Thread Andrew Nagy
OAI-PMH!

Andrew

On Thu, Feb 19, 2009 at 4:08 PM, Matthias Einbrodt <
matthias.einbr...@meinbrodt.net> wrote:

> Hello,
>
> I'm interested in your opinion regarding the question which (kind of)
> APIs an OPAC should provide nowadays and in the near or maybe not so
> near future!
>
> Thanks in advance and best regards
>
> Matthias Einbrodt
>
>


Re: [CODE4LIB] Mime type for PHP serialized objects

2009-01-26 Thread Andrew Nagy
Correction - "text/x-php" - but again - I don't think it will make any
effect on you.

Andrew

On Tue, Dec 30, 2008 at 3:16 PM, Andrew Nagy  wrote:

> I've used "application/x-php" in the past.  I wouldn't really worry about
> it though if you are building the server and the client.  The mime type doesn't
> make that much difference.  You could even just use "text/plain".
>
> Andrew
>
> On Tue, Dec 30, 2008 at 1:55 PM, Cloutman, David  > wrote:
>
>> I have a quick question for any PHP developers out there.
>>
>> I am writing a SOA application to manage my library's events calendar.
>> The basic idea is to create a public API that our web site or other
>> community organizations can use to query and consume information. I am
>> using JSON as the default output for information, but would like to add
>> the option of outputting native serialized PHP data structures as
>> created by the serialized() function.
>>
>> My question is, what mime type should I use for serialized PHP data? The
>> best suggestion I saw through Google was application/vnd.php.serialized,
>> which was posted as a proposal. I don't know if any standard was adopted
>> though. Has anyone else thought about this issue?
>>
>> - David
>>
>>
>>
>> ---
>> David Cloutman 
>> Electronic Services Librarian
>> Marin County Free Library
>>
>> Email Disclaimer: http://www.co.marin.ca.us/nav/misc/EmailDisclaimer.cfm
>>
>
>


Re: [CODE4LIB] Mime type for PHP serialized objects

2009-01-26 Thread Andrew Nagy
I've used "application/x-php" in the past.  I wouldn't really worry about it
though if you are building the server and the client.  The mime type doesn't
make that much difference.  You could even just use "text/plain".
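
A minimal sketch of what that output option could look like - the mime type
is one of the suggestions from this thread, and the event data and parameter
name are hypothetical:

<?php
// Hypothetical event data pulled from the calendar backend.
$events = array(
    array('title' => 'Storytime', 'date' => '2009-01-05'),
    array('title' => 'Book club', 'date' => '2009-01-12'),
);

$format = isset($_GET['format']) ? $_GET['format'] : 'json';

if ($format === 'php') {
    // Native PHP serialization for PHP clients.
    header('Content-Type: application/x-php');
    echo serialize($events);
} else {
    // JSON stays the default output.
    header('Content-Type: application/json');
    echo json_encode($events);
}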

Andrew

On Tue, Dec 30, 2008 at 1:55 PM, Cloutman, David
wrote:

> I have a quick question for any PHP developers out there.
>
> I am writing a SOA application to manage my library's events calendar.
> The basic idea is to create a public API that our web site or other
> community organizations can use to query and consume information. I am
> using JSON as the default output for information, but would like to add
> the option of outputting native serialized PHP data structures as
> created by the serialized() function.
>
> My question is, what mime type should I use for serialized PHP data? The
> best suggestion I saw through Google was application/vnd.php.serialized,
> which was posted as a proposal. I don't know if any standard was adopted
> though. Has anyone else thought about this issue?
>
> - David
>
>
>
> ---
> David Cloutman 
> Electronic Services Librarian
> Marin County Free Library
>
> Email Disclaimer: http://www.co.marin.ca.us/nav/misc/EmailDisclaimer.cfm
>


Re: [CODE4LIB] BISAC Subject Headings Lookup or Crosswalk

2009-01-21 Thread Andrew Nagy
I saw a great presentation by Jesse Haro from Phoenix Public Library on their Endeca
catalog.  They had their catalogers go back and recatalog the entire
collection with BISAC headings.  You might want to see if you can get in
touch with him to see if he has any information for you.

http://mlamasslib.blogspot.com/2008/05/endeca-developments-in-opac-world.html

Andrew

On Wed, Jan 21, 2009 at 12:12 PM, Ryan Eby  wrote:

> I was wondering if anyone knows of a good BISAC Subject Headings
> source for looking up a recommended BISAC based on ISBN, LCSH, etc.
> I've found some pages on oclc.org saying they were starting work on
> crosswalks and possibly including them in WorldCat but I haven't seen
> any returned in any WorldCat api calls yet. I've also read that ONIX
> records often have a BISAC code, is there a good source that might
> cover many publishers?
>
> http://www.bisg.org/standards/bisac_subject/index.html
>
> http://www.oclc.org/dewey/updates/numbers/
>
> eby
>


Re: [CODE4LIB] "release management"

2008-11-04 Thread Andrew Nagy
I second the recommendation of Fogel's book.

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Randy Metcalfe [EMAIL PROTECTED]
Sent: Wednesday, October 29, 2008 10:42 AM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] "release management"

2008/10/29 Jonathan Rochkind <[EMAIL PROTECTED]>:
> Can anyone reccommend any good sources on how to do 'release management' in
> a small distributed open source project. Or in a small in-house not open
> source project, for that matter. The key thing is not something assuming
> you're in a giant company with a QA team, but instead a small project with a
> a few (to dozens) of developers, no dedicated QA team, etc.
>
> Anyone have any good books to reccommend on this?

Karl Fogel's book Producing Open Source Software is an excellent
choice, though it is not solely focused on release management.

http://producingoss.com/

Cheers,

Randy

--
Randy Metcalfe


Re: [CODE4LIB] Open Source Discovery Portal Camp - November 6 - Philadelphia

2008-10-07 Thread Andrew Nagy
I updated the conference wiki with a link to nearby hotels suggested by
PALINET.

Here is the link:
http://www.palinet.org/ourorg_directions_hotels.aspx

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Eric Lease Morgan
> Sent: Tuesday, October 07, 2008 12:34 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Open Source Discovery Portal Camp - November 6
> - Philadelphia
>
> It looks as if the University of Pennsylvania is having an event on or
> around the same time as the VUFind event, and that is why things are
> filling/full up. FYI. I believe it is better make reservations sooner
> rather than later.
>
> --
> ELM


[CODE4LIB] Open Source Discovery Portal Camp - November 6 - Philadelphia

2008-10-02 Thread Andrew Nagy
Implementing or hacking an Open Source discovery system such as VuFind or 
Blacklight?
Interested in learning more about Lucene/Solr applications?

Join the development teams from VuFind and Blacklight at PALINET in 
Philadelphia, November 6, 2008, for day of discussion and sharing. We hope to 
examine difficult issues in developing discovery systems, such as:

* ILS Connectivity
* Authority Control
* Data Importing
* User Interface Issues

Date and time: November 6, 2008, 9:00am to 4:00pm

Registration Fee: $40 for PALINET members and $50 for PALINET non-members.

For more information and how to register, visit our conference wiki:
http://opensourcediscovery.pbwiki.com


Re: [CODE4LIB] LOC Authority Data

2008-10-01 Thread Andrew Nagy
If only we knew someone who worked at the LOC that we could tell this 
information to ...

From: Code for Libraries [EMAIL PROTECTED] On Behalf Of Ed Summers [EMAIL PROTECTED]
Sent: Monday, September 29, 2008 7:02 PM
To: CODE4LIB@LISTSERV.ND.EDU
Subject: Re: [CODE4LIB] LOC Authority Data

On Mon, Sep 29, 2008 at 6:01 PM, Jonathan Rochkind <[EMAIL PROTECTED]> wrote:
> I thought I remembered something about Casey Bisson doing exactly that with
> a grant/award he received? I forget what happened to it. A snapshot would
> just be a snapshot of course, it wouldn't include records created or
> modified after the snapshot.

That was the bibliographic records which he purchased and donated to
the Internet Archive:

  http://www.archive.org/details/marc_records_scriblio_net

They are also available via a torrent:

  http://torrents.code4lib.org/

It definitely would be nice to do the same thing for the authority
data. It's kind of absurd to me that this data isn't already in the
public domain, since it's uh in the public domain. But what do I know,
I'm not a lawyer.

//Ed


Re: [CODE4LIB] LOC Authority Data

2008-09-29 Thread Andrew Nagy
> Although note that these are only *subject* authorities.
>
> Andrew, I think you may also be looking for name authorities (since I
> assume this inquiry came from a suspiciously topically similar thread
> on vufind-tech).

Yes - I would love to be able to obtain all authority files.

>
> Also, Ed's SKOS data lumps all of the subfields into one string
> literal, so:

Yeah - the MARC record has much more data than the RDF file.  I haven't 
explored the indexing process for authority records in enough detail yet to 
determine whether this string munging is a problem or not.

Andrew


Re: [CODE4LIB] LOC Authority Data

2008-09-29 Thread Andrew Nagy
I was aware of this data - but I'm really curious if anyone has ever heard of 
or seen a scraping process that is run frequently to get updates.  The data on 
the fred2.0 site is from 2006.  I'd like to try to keep an up-to-date copy - 
especially since we Americans are "entitled" to "free" access to the data.

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Jason Griffey
> Sent: Tuesday, September 23, 2008 5:06 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] LOC Authority Data
>
> Simon Spero at UNC did a scrape of the entirety of the LoC Authority
> files in Dec of 2006. They are available at Fred 2.0:
>
> http://www.ibiblio.org/fred2.0/wordpress/?page_id=10
>
> Jason
>
>
> On Tue, Sep 23, 2008 at 4:35 PM, Andrew Nagy
> <[EMAIL PROTECTED]> wrote:
> > Hello - I am curious if anyone knows of a way to access the entire
> collection of authority records from the LOC.  It seems that the only
> way to access them know is one record at a time.  Feel free to email me
> off line if you are uncomfortable posting a response to the list.
> >
> > Thanks
> > Andrew
> >


[CODE4LIB] LOC Authority Data

2008-09-23 Thread Andrew Nagy
Hello - I am curious if anyone knows of a way to access the entire collection 
of authority records from the LOC.  It seems that the only way to access them 
now is one record at a time.  Feel free to email me offline if you are 
uncomfortable posting a response to the list.

Thanks
Andrew


Re: [CODE4LIB] Conference: Access 2008 in Hamilton, ON -- October 1-4.

2008-08-27 Thread Andrew Nagy
This may be a bit too specific or complex for 1 day - but I will throw it out 
there and would be more than happy to lead the event.

This is an idea I kind of formalized today:

Develop an authority control mechanism for VuFind (www.vufind.org) that would 
utilize the Library of Congress authority data and automatically authorize 
bibliographic records in VuFind.

Step 1:  Download and index LOC authority author records
Step 2:  Update all bib records in a VuFind instance with authorized forms and 
alternate forms
Step 3:  Delete unused authority records in authority index
Step 4:  Create a script that processes this on a periodic basis (monthly or 
yearly).

Voila - free authority control for the library's catalog (assuming they opt to 
use VuFind).
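
As a rough sketch of step 1, a heading could be pushed into a Solr authority
index with Solr's XML update format - the Solr URL, core name, field names,
and example values below are all assumptions, not VuFind's actual import code:

<?php
// One authority heading expressed as a Solr XML update document.
$doc = '<add><doc>'
     . '<field name="id">n79021164</field>'
     . '<field name="heading">Twain, Mark, 1835-1910</field>'
     . '<field name="use_for">Clemens, Samuel Langhorne, 1835-1910</field>'
     . '</doc></add>';

// Helper to POST an XML body to the (hypothetical) authority core;
// adjust host, port, and core name to your install.
function solr_post($xml)
{
    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => 'Content-Type: text/xml',
        'content' => $xml,
    )));
    return file_get_contents('http://localhost:8983/solr/authority/update', false, $context);
}

solr_post($doc);
solr_post('<commit/>');   // make the new documents searchable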

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> John Fink
> Sent: Tuesday, August 26, 2008 1:30 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] Conference: Access 2008 in Hamilton, ON -- October
> 1-4.
>
> Also folks, I'm still soliciting Access Hackfest ideas -- let me know
> if you
> have any.
>
> ---
>
> Registration is now open for Access 2008, Canada's premier library
> technology conference that focuses on issues relating to technology
> planning, development, challenges and solutions.
>
>
>
> *When*: Oct. 1 - 4, 2008
>
>
>
> *Where*: Hamilton, Ontario
>
>
>
> *How:* Visit the conference website to register:
> http://access2008.blog.lib.mcmaster.ca/registration/
>
>
>
> *What:* Check the conference website for the exciting program! Keynotes
> this
> year will be Karen Schneider and Bob Young!
> http://access2008.blog.lib.mcmaster.ca/
>
>
>
>
>
> This year the conference will be held in Hamilton, Ontario at the
> Sheraton
> Hamilton Hotel (conference) and Hamilton Public Library (Hackfest) from
> October 1-4 and is hosted by:
>
> McMaster University, Hamilton Public Library, Mohawk College & Brock
> University.
>
>
>
> **Reserve your room at the Sheraton by Sept. 5th to secure the
> conference
> rate.**
>
>
>
> Spots are filling up fast - please register soon!
>
>
>
> *Need conference funding?*
>
> You may qualify for a grant! There are two grants available, each worth
> $1000:
>
> ProQuest Student Travel Grant (for students only)
>
> Equinox-Evergreen First-Timer Grant (for first-time Access attendees
> only)
>
>
>
> For more information about these grants and to apply, see the
> conference
> website: http://access2008.blog.lib.mcmaster.ca/travel-grants
>
>
> --
> http://libgrunt.blogspot.com -- library culture and technology.


Re: [CODE4LIB] implementing cool uris in java

2008-07-03 Thread Andrew Nagy
I talked to someone once who did this by creating a dynamic 404 error page 
(assuming for whatever reason you can't use mod_rewrite).  If you are familiar 
with the MVC design pattern, you could make your 404 error page your main 
application controller that directs the traffic.  The problem with this 
(besides being a major hack) is that your web logs will be all screwy.

But as others have said, there are plenty of ways to do this.  mod_rewrite++
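
A minimal PHP sketch of that 404 hack, in case it helps picture it - Apache's 
ErrorDocument 404 would point at this script, the route shape follows Emily's 
example below, and the handler is hypothetical:

<?php
// Invoked by Apache as the 404 handler; the original request path is still
// available in REQUEST_URI.
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

if (preg_match('#^/record/(\d+)$#', $path, $m)) {
    header('HTTP/1.1 200 OK');          // override the 404 status Apache set
    show_record($m[1]);
} else {
    header('HTTP/1.1 404 Not Found');
    echo 'Not found';
}

// Hypothetical handler -- look up and render the record.
function show_record($id)
{
    echo 'Record ' . htmlspecialchars($id);
}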

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Emily Lynema
> Sent: Thursday, July 03, 2008 12:22 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] implementing cool uris in java
>
> I'm looking around for tools to implement cool uris in java. I've been
> studying the restlet framework tonight, and while it sounds cool, I
> think it would also require a complete re-write of an application that
> is currently based on the Servlet API. And, of course, I'm working
> under
> a time crunch.
>
> Is there anything out there to assist me in working with cool uris
> besides just using regular expressions when parsing URLs?
>
> For example, I'd like to create URLs like:
>
> http://catalog.lib.ncsu.edu/record/123456
>
> instead of:
>
> http://catalog.lib.ncsu.edu/record?id=1234565
>
> -emily
> --
> Emily Lynema
> Systems Librarian for Digital Projects
> Information Technology, NCSU Libraries
> 919-513-8031
> [EMAIL PROTECTED]


Re: [CODE4LIB] III SIP server

2008-06-12 Thread Andrew Nagy
Yes - Please do share!

Here is my vote for an SVN server hosted at code4lib.org

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Walker, David
> Sent: Wednesday, June 11, 2008 6:00 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] III SIP server
>
> I'd like to see the PHP code, Mark.  Would you mind sending it to me,
> or perhaps posting it somewhere where we all might download it?
>
> Thanks!
>
> --Dave
>
> ---
> David Walker
> Library Web Services Manager
> California State University
> http://xerxes.calstate.edu
>
> 
>
> From: Code for Libraries on behalf of Mark Ellis
> Sent: Wed 6/11/2008 8:42 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] III SIP server
>
>
>
> Wayne,
>
> What are you using for a client?  I have some PHP for getting patron
> information, but there's nothing III specific about it, so I don't know
> if it'd be helpful.  Do you have the 3M SIP SDK?
>
> Mark
>
> Mark Ellis
> Manager, Information Technology
> Richmond Public Library
> Richmond, BC
> (604) 231-6410
> www.yourlibrary.ca
>
>
> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Schneider, Wayne
> Sent: Tuesday, June 10, 2008 4:29 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] III SIP server
>
> Has anyone out there attempted to code to III's SIP server?  We're new
> to III, having just merged with another library system that is a III
> customer, and were hoping to be able to use SIP for some basic customer
> account information - nothing too fancy, just basically some of what is
> supported in version 2.00 of the protocol.  Name and address would be
> nice (name we seem to get, but no address), items out, items on hold,
> fines and fees, etc.  Our other ILS, SirsiDynix Horizon, has pretty
> good
> support for SIP 2.00 features, only somewhat idiosyncratic, with a few
> fairly well-documented extensions, and we were hoping to find the same
> level of support in III's server.  Is this an entirely unreasonable
> expectation?
>
> wayne
> --
> Wayne Schneider
> ILS System Administrator
> Hennepin County Library
> 952.847.8656
> [EMAIL PROTECTED]


Re: [CODE4LIB] Internet Archive collection codes?

2008-06-04 Thread Andrew Nagy
Excuse me if I am late to the game on this one - but at the Code4Lib conference 
either Brewster Kahle or Aaron Swartz spoke about an API to either the Open 
Library or the Internet Archive.  Is this available, or are there any plans to 
release it?  It seems like you are referring to some sort of API.

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> [Alexis Rossi]
> Sent: Tuesday, June 03, 2008 10:58 PM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] Internet Archive collection codes?
>
> Hi,
>
> You can do a search for mediatype:collection to return results for all
> 4200+ collections.
>
> We have a search interface that will return specific fields for this
> query
> in xml format, if you'd like, but I'll need to give you some
> permissions
> to access it.  Feel free to send me an email if you'd like to use that
> ([EMAIL PROTECTED]).
>
> Alexis
>
>
>
>
> > Does anyone know where to get a list of Internet Archive collection
> > codes and their human-displayable display labels?
> >
> > For instance:
> > americana => "American Libraries"
> > gutenberg => "Project Gutenberg"
> > librivoxaudio => [hell if I know]
> >
> >
> > Some of these I can 'scrape' from the quick search box popup on the
> IA
> > website. But their not all in there. And maybe there's a better place
> to
> > get these?
> >
> > Anyone know where the right place to ask this of the IA and/or IA
> > developer community is?
> >
> > Jonathan
> >


Re: [CODE4LIB] how to obtain a sampling of ISBNs

2008-04-28 Thread Andrew Nagy
When playing around with OCLC's xISBN service, I plugged in the ISBN for one 
of the "Gone with the Wind" books we have at our library - it returned 
something like 150 related ISBNs.  You could try doing that for a few 
items.
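
Loosely sketched in PHP - the xISBN endpoint, parameters, and response shape 
here are from memory and may not match the current service, so treat them as 
assumptions and check OCLC's documentation:

<?php
// Expand one seed ISBN into its related editions via OCLC's xISBN service.
$seed = '0451526341';   // replace with an ISBN from your collection
$url  = 'http://xisbn.worldcat.org/webservices/xid/isbn/' . $seed
      . '?method=getEditions&format=json';

$response = json_decode(file_get_contents($url), true);

$isbns = array();
if (isset($response['list'])) {
    foreach ($response['list'] as $entry) {
        $isbns[] = $entry['isbn'][0];
    }
}

print_r($isbns);   // a quick pool of related ISBNs to sample from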

Just an idea ...

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Godmar Back
> Sent: Monday, April 28, 2008 9:35 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: [CODE4LIB] how to obtain a sampling of ISBNs
>
> Hi,
>
> for an investigation/study, I'm looking to obtain a representative
> sample set (say a few hundreds) of ISBNs. For instance, the sample
> could represent LoC's holdings (or some other acceptable/meaningful
> population in the library world).
>
> Does anybody have any pointers/ideas on how I might go about this?
>
> Thanks!
>
>  - Godmar


Re: [CODE4LIB] place for code examples?

2008-03-31 Thread Andrew Nagy
> I still think if you want a production machine, though, you shouldn't
> be doing development on there.  If you want to do something with
> DokuWiki put it some other place first and get it like you want it
> there.  Otherwise, I think we're just recreating anvil with all the
> inherent problems that an open/development environment will entail.
> Of course, making that decision can fall to the sys admins if the
> community doesn't have a preference (they'll be the ones who get to
> pick up the pieces anyway).

Can OSU provide a staging machine to test out implementations of things like 
dokuwiki before launching them live?  Perhaps make the code4lib server 
virtualized?

Andrew


Re: [CODE4LIB] place for code examples?

2008-03-31 Thread Andrew Nagy
I think a snippet repository would be a fantastic idea that would fit well 
within the code4lib website.  DokuWiki would also be a good fit for this and 
would allow people to share the "OAI harvester in under 50 lines", etc.

snippet.code4lib.org++

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Jonathan Rochkind
> Sent: Monday, March 31, 2008 11:36 AM
> To: CODE4LIB@LISTSERV.ND.EDU
> Subject: Re: [CODE4LIB] place for code examples?
>
> I don't know if it's the best solution, but you could use the code4lib
> wiki if you like. wiki.code4lib.org.  Won't have code formatting or
> anything like that.
>
> Incidentally, I'm interested in getting a DokuWiki installation going
> for code4lib, which I think will serve our needs somewhat better than
> the current MediaWiki.  But that goes back to the thread I introduced
> which died about how to grant shell access to code4libbers on the OSU
> hosted code4lib.org.  Everyone seemed to agree that one or two or three
> code4libbers were neccesary to accept responsibility as "app admin
> coordinator" on the machine, but nobody actually volunteered to do
> that,
> so we're a bit stuck.  If we had a process/structure in place, and
> there
> was an app you wanted installed on code4lib.org to do this, there might
> be a way to do that---depending on what process/structure we come up
> with. But without one...
>
> Jonathan
>
> Keith Jenkins wrote:
> > Does there already exist some place to put some code examples to
> share
> > with the code4lib community?  (I'm thinking of snippets somewhere on
> > the order of 10-100 lines, like the definition of a php function.)
> >
> > Keith
> >
> >
>
> --
> Jonathan Rochkind
> Digital Services Software Engineer
> The Sheridan Libraries
> Johns Hopkins University
> 410.516.8886
> rochkind (at) jhu.edu


[CODE4LIB] VuFind 0.8 Release

2008-03-18 Thread Andrew Nagy
Excuse the Cross Posting

Hello All - I am pleased to announce the latest release of VuFind - the open 
source library resource discovery platform.  Version 0.8 Beta is now available 
for download - you can access the download link from 
http://vufind.org/downloads.php or from http://sourceforge.net/projects/vufind.

The major enhancement in version 0.8 is our new MARC import tool developed by 
Wayne Graham.  This should resolve many of the issues with importing records, 
and it also brings a speed improvement.

If you are interested in trying out VuFind, have a look at our live demo: 
http://vufind.org/demo

Or feel free to join our mailing list: 
https://lists.sourceforge.net/mailman/listinfo/vufind-general

Enjoy!
Andrew Nagy


Re: [CODE4LIB] Planning open source Library system at Duke

2008-01-28 Thread Andrew Nagy
> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Nathan Vack
>
> Isn't there already an extant open-source ILS that's out there, and
> reputed to be rather good?
>
> I'm all for parallel approaches to problems... but the world of ILSes
> is pretty small. Maybe use fat cash from Mellon to help bake
> Evergreen the rest of the way?
>

Hear Hear!

I'm sure our library would love to be a part of a grant where large sums of money 
get thrown at some of the existing open source ILSs to further their development 
in the areas that academic libraries need.  The last thing the library 
community needs is yet another planning group to analyze the next generation 
catalog or to survey the libraries to determine if they are happy or not.  I 
think Marshall Breeding and others' survey results are conclusive enough.  We 
all know we need something better - let's start working on it!

Andrew


> On Jan 28, 2008, at 4:26 PM, John Little wrote:
>
> > Code4Lib:
> >
> > The Duke University Libraries are preparing a proposal for the Mellon
> > Foundation to convene the academic library community to design an
> open
> > source Integrated Library System (ILS).  We are not focused on
> > developing an
> > actual system at this stage, but rather blue-skying on the elements
> > that
> > academic libraries need in such a system and creating a blueprint.
> > Right
> > now, we are trying to spread the word about this project and find
> > out if
> > others are interested in the idea.


Re: [CODE4LIB] Code4Lib http irc channel

2008-01-07 Thread Andrew Nagy
> I'm not sure who manages linuxinlibraries.com, but it's not directly
> related to code4lib.  Perhaps it's time for us to run an IRC cgi
> client on a code4lib server?

This would be excellent - I have been battling my campus IT dept for years to 
allow my work computer to access IRC with absolutely no luck!

Andrew


Re: [CODE4LIB] [Fwd: z39.50 holdings schema]

2007-12-17 Thread Andrew Nagy
> It is also my understanding that while the Voyager NCIP API supports
> their ILL product, it was not meant to serve as a general purpose NCIP
> API.  I believe that that accounts for the lack of (customer)
> documentation.  Back in March of 2004, the then Endeavor Voyager
> Product Manager discussed their plans for further development of
> Voyager's NCIP API, and I don't think things have changed much since
> then [1].  If you've heard (or know) different, please let us (Voyager
> customers) know.  I've had my eye on NCIP as an API for quite some
> time.

Michael - thanks for the feedback.  I agree with everyone else that NCIP is not 
the killer app for ILS interoperability - but it's the closest thing we have 
at this moment.  What I am envisioning for VuFind is a base class that handles the 
generic NCIP functionality, and then specialized classes for each ILS that tweak 
the NCIP messages.

From what I have heard, Voyager 7 is supposed to have a much fuller NCIP 
implementation, and I believe the same story goes for SirsiDynix.  But these are 
just that - stories.  I also believe both Evergreen and Koha have NCIP as well.
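
To make the base-class idea concrete, here is a rough sketch in PHP (this is not 
VuFind code - the class names are made up, and the NCIP message is trimmed to a 
bare minimum, so check the NCIP schema for the real element layout):

<?php
// Illustrative only: a generic NCIP client plus a per-ILS subclass that
// overrides whatever a particular vendor handles differently.
class NcipClient
{
    protected $url;

    public function __construct($url)
    {
        $this->url = $url;              // the ILS's NCIP responder endpoint
    }

    // Subclasses override this to tweak the message for their vendor.
    protected function buildLookupUser($patronId)
    {
        return '<?xml version="1.0" encoding="UTF-8"?>'
             . '<NCIPMessage><LookupUser><UserId><UserIdentifierValue>'
             . htmlspecialchars($patronId)
             . '</UserIdentifierValue></UserId></LookupUser></NCIPMessage>';
    }

    protected function send($xml)
    {
        $context = stream_context_create(array('http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/xml\r\n",
            'content' => $xml,
        )));
        return simplexml_load_string(file_get_contents($this->url, false, $context));
    }

    public function lookupUser($patronId)
    {
        return $this->send($this->buildLookupUser($patronId));
    }
}

class VoyagerNcipClient extends NcipClient
{
    protected function buildLookupUser($patronId)
    {
        // Adjust the request here if the vendor's responder is idiosyncratic.
        return parent::buildLookupUser($patronId);
    }
}
?>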

Andrew


Re: [CODE4LIB] [Fwd: z39.50 holdings schema]

2007-12-17 Thread Andrew Nagy
> But this part is
> what I, as a developer writing discovery systems, need most and fail to
> get from current systems.

Exactly!

This is the reason I have been investigating NCIP - it is already implemented 
(currently in limited form) in my ILS, and open source ILSs such as Evergreen 
have it as well.

There are many standards that would work - but NCIP seems to be the only thing 
that is practical at the moment.  I am also a newbie to NCIP, and my ILS vendor 
just told me that they don't have any documentation on their NCIP server.

Where is Roy and his manifesto when you need him!

Andrew


Re: [CODE4LIB] z39.50 holdings schema

2007-12-17 Thread Andrew Nagy
Emily - we are investigating NCIP quite a bit here for use with VuFind.  Maybe 
this would be an appropriate standard to standardize on?

Take care,
Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Emily Lynema
> Sent: Monday, December 17, 2007 9:42 AM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] z39.50 holdings schema
>
> Anybody in this group have any experience using / implementing the
> z39.50 holdings schema?
>
> http://www.loc.gov/z3950/agency/defns/holdings1-4.html
>
> As part of the DLF ILS Discovery Interface Task Force, we are looking
> for a good schema to define holdings and item-related information (such
> as circulation status). While MARCXML is always an option for MARC
> holdings, I have the sense (aka, I know) that not all institutions /
> ILSs create MARC holdings for all records. So it would be nice to have
> a
> schema into which it would be easy to translate either a MARC holdings
> record or just local holdings stored in some other way + circulation
> information.
>
> The rumor on the street is that z39.50 holdings schema is too complex
> and has never really been used. Anyone want to confirm or deny?
>
> I'm also interested in the up and coming ISO Holdings Schema (ISO
> 20775)
> that it sounds like has been motivated along by OCLC-PICA. But I don't
> have much information on that, so I'd be interested in hearing from
> anyone who knows more about that one, as well.
>
>
> Thanks,
> -emily
> --
> Emily Lynema
> Systems Librarian for Digital Projects
> Information Technology, NCSU Libraries
> 919-513-8031
> [EMAIL PROTECTED]


[CODE4LIB] Voting for Code4Lib 2008 Prepared Talks

2007-12-10 Thread Andrew Nagy
There was some minor miscommunication with the voting system, and the initial 
link sent out pointed to a test instance.

Please use the following URL to vote on the conference talks - 
voting will be officially open as of December 11th:

http://dilettantes.code4lib.org:8080/election/index/2


For those of you who voted using the initial link sent out by Mike Giarlo - you 
will need to vote again by using the new link.
Keep in mind that there is enough room for roughly 17 talks and you can only 
vote once!

Andrew


Re: [CODE4LIB] Vote on code4lib 2008 talk proposals!

2007-12-10 Thread Andrew Nagy
It might also be worth noting that, based on the current draft schedule, there 
are roughly 17 spots for talks.  This might have an effect on voting.  Are the 17 
spots set in stone, or will that change based on the outcome?

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Michael J. Giarlo
> Sent: Monday, December 10, 2007 2:06 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] Vote on code4lib 2008 talk proposals!
>
> On Dec 10, 2007 1:14 PM, Ranti Junus <[EMAIL PROTECTED]> wrote:
> >
> > This is my first time.  When the instruction said "choose the score
> > you wish to assign from 0-3", does "0" mean "do not want" and "3"
> mean
> > "defintely want"?  Just want to be sure.
> >
>
> Thanks for asking, Ranti; we did not make that clear.
>
> Your hunch is correct.  "0" means "do not want" and "3" means "WANT
> WANT WANT."  We have not done ranking in years past, so this is
> something of an experiment for us.
>
> -Mike


Re: [CODE4LIB] open source chat bots?

2007-12-03 Thread Andrew Nagy
Karen, we are building a custom chat reference system, based on Jabber, as part 
of our new website redesign.  Basically you will see all of the reference 
librarians who are logged in to the Jabber server, each with a little 
picture/avatar along with their specialty areas.  The question is - who becomes 
the "catch all" general reference librarian?  So we wanted to experiment with a 
chat bot and a reference script one of our reference librarians wrote up.  If a 
student is totally clueless and doesn't know which librarian to pick, they can 
chat with the chat bot ... or maybe we will hire Ms. Dewey!  I don't know if it 
will work out well, but it is something we want to play around with.  Then we 
could hook it up to our libstats implementation and automatically record all 
transactions.  It's an idea that we are just experimenting with at this stage.  
I'll let you know when/if I get something up and running.
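
In case it helps to picture it, the routing logic is basically this (a throwaway 
sketch, not our actual code - the roster array and the bot address are made up):

<?php
// Hypothetical example: route a chat to an online librarian whose specialty
// matches the student's subject; otherwise hand it to the catch-all bot.
function pickRespondent(array $roster, $subject)
{
    foreach ($roster as $jid => $info) {
        if ($info['online'] && in_array($subject, $info['specialties'])) {
            return $jid;                      // a matching librarian is online
        }
    }
    return 'refbot@chat.example.edu';         // nobody matched: the chat bot
}

$roster = array(
    'alice@chat.example.edu' => array('online' => true,  'specialties' => array('history')),
    'bob@chat.example.edu'   => array('online' => false, 'specialties' => array('biology')),
);

echo pickRespondent($roster, 'biology');      // prints refbot@chat.example.edu
?>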

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> K.G. Schneider
> Sent: Monday, December 03, 2007 12:18 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] open source chat bots?
>
> On Mon, 3 Dec 2007 10:14:29 -0500, "Andrew Nagy"
> <[EMAIL PROTECTED]> said:
> > Hello - there was quite a bit of talk about chat bots a year or 2
> back.
> > I was wondering if anyone knew of an open source chat bot that works
> with
> > jabber?
> >
> > Thanks
> > Andrew
>
> I'm afraid this isn't an answer, but several times last week I almost
> posted a similar query to DIG_REF. I'm interested in this response and
> in any responses that would lead to a discussion of an OSS virtual
> reference solution with critical-path VR components such as multiple
> logins, statistics, transcripts, etc.
>
> Karen G. Schneider
> [EMAIL PROTECTED]


Re: [CODE4LIB] open source chat bots?

2007-12-03 Thread Andrew Nagy
> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Wayne Graham
> Sent: Monday, December 03, 2007 12:47 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] open source chat bots?
>
> Andrew,
>
> Not sure if this is what you're looking for, but in ColdFusion 7,

Stop right there, did you say ColdFusion?  I think I just threw up in my mouth 
a little. :)

I would rather have something available in Java, C, C#, Perl, PHP, etc.
I was thinking about writing my own - but I have too much on my plate as it is, so 
I am looking to adapt something from the open source world.

Thanks
Andrew


[CODE4LIB] open source chat bots?

2007-12-03 Thread Andrew Nagy
Hello - there was quite a bit of talk about chat bots a year or two back.  I was 
wondering if anyone knows of an open source chat bot that works with Jabber?

Thanks
Andrew


Re: [CODE4LIB] httpRequest javascript.... grrr

2007-11-29 Thread Andrew Nagy
Don't leave out the Yahoo YUI library as something to consider.  What's nice is 
that you don't have to load the entire library as one big huge JS file - you 
can pick and choose which components you want to include in your page, minimizing 
the JavaScript file size.  If you only want one little JS widget on your page, 
the browser doesn't need to download and process a 150KB Prototype JS file.

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Jonathan Rochkind
> Sent: Thursday, November 29, 2007 10:24 AM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] httpRequest javascript grrr
>
> These days I think jquery seems more generally popular than prototype.
> But both are options. I definitely would use one or the other, instead
> of doing it myself from scratch. They take care of a lot of weird
> cross-browser-compatibility stuff, among other conveniences.
>
> Jonathan
>
> Jesse Prabawa wrote:
> > Hi Eric,
> >
> > Have you considered using a Javascript Library to handle these
> details? I
> > would recommend that you refactor your code to use one so that you
> can
> > concentrate on what you actually want to do instead. This way you can
> also
> > avoid having browser incompatabilities that are already solved if you
> use a
> > Javascript Library. Try checking out Prototype at
> > http://www.prototypejs.org/
> >
> > Best regards,
> >
> > Jesse
> >
> > On Nov 29, 2007 10:21 PM, Eric Lease Morgan <[EMAIL PROTECTED]> wrote:
> >
> >
> >> Why doesn't my httpRequest Javascript function return unless I add
> an
> >> alert? Grrr.
> >>
> >> I am writing my first AJAX-y function called add_tag. This is how it
> >> is suppose to work:
> >>
> >>   1. define a username
> >>   2. create an httpRequest object
> >>   3. define what it is suppose to happen when it gets a response
> >>   4. open a connection to the server
> >>   5. send the request
> >>
> >> When the response it is complete is simply echos the username. I
> know
> >> the remote CGI script works because the following URL works
> correctly:
> >>
> >>   http://mylibrary.library.nd.edu/demos/tagging/?
> >> cmd=add_tag&username=fkilgour
> >>
> >> My Javascript is below, and it works IF I retain the "alert
> >> ( 'Grrr!' )" line. Once I take the alert out of the picture I get a
> >> Javascript error "xmldoc has no properties". Here's my code:
> >>
> >>
> >>   function add_tag() {
> >>
> >>// define username
> >>var username  = 'fkilgour';
> >>
> >>// create an httpRequest
> >>var httpRequest;
> >>if ( window.XMLHttpRequest ) { httpRequest = new
> XMLHttpRequest(); }
> >>else if ( window.ActiveXObject ) { httpRequest = new
> ActiveXObject
> >> ( "Microsoft.XMLHTTP" ); }
> >>
> >>// give the httpRequest some characteristics and send it off
> >>httpRequest.onreadystatechange = function() {
> >>
> >> if ( httpRequest.readyState == 4 ) {
> >>
> >>  var xmldoc = httpRequest.responseXML;
> >>  var root_node = xmldoc.getElementsByTagName( 'root' ).item( 0
> );
> >>  alert ( root_node.firstChild.data );
> >>
> >> }
> >>
> >>};
> >>
> >>httpRequest.open( 'GET', './index.cgi?cmd=add_tag&username=' +
> >> username, true );
> >>httpRequest.send( '' );
> >>alert ( 'Grrr!' );
> >>
> >>   }
> >>
> >>
> >> What am I doing wrong? Why do I seem to need a pause at the end of
> my
> >> add_tag function? I know the anonymous function -- function() -- is
> >> getting executed because I can insert other httpRequest.readyState
> >> checks into the function and they return. Grrr.
> >>
> >> --
> >> Eric Lease Morgan
> >> University Libraries of Notre Dame
> >>
> >> (574) 631-8604
> >>
> >>
> >
> >
>
> --
> Jonathan Rochkind
> Digital Services Software Engineer
> The Sheridan Libraries
> Johns Hopkins University
> 410.516.8886
> rochkind (at) jhu.edu


Re: [CODE4LIB] httpRequest javascript.... grrr

2007-11-29 Thread Andrew Nagy
Eric - have a look at some of the AJAX functions I wrote for VuFind - there 
are some almost identical function calls that work just fine.
http://vufind.svn.sourceforge.net/viewvc/*checkout*/vufind/web/services/Record/ajax.js?revision=106
See function SaveTag

Also - you might want to consider using the Yahoo YUI Connection Manager or the 
Prototype AJAX toolkit.  They both work great, and you don't need to spend time 
debugging.  I also find Firebug (a Firefox plugin) to be an awesome AJAX debugger.

Just by looking at your function real quick - you are calling 
httpRequest.send('') at the end of your function.  I think I read somewhere 
that you should send null and not an empty string.  Maybe that will solve it?  
Not really sure.


Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Eric Lease Morgan
> Sent: Thursday, November 29, 2007 9:22 AM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] httpRequest javascript grrr
>
> Why doesn't my httpRequest Javascript function return unless I add an
> alert? Grrr.
>
> I am writing my first AJAX-y function called add_tag. This is how it
> is suppose to work:
>
>1. define a username
>2. create an httpRequest object
>3. define what it is suppose to happen when it gets a response
>4. open a connection to the server
>5. send the request
>
> When the response it is complete is simply echos the username. I know
> the remote CGI script works because the following URL works correctly:
>
>http://mylibrary.library.nd.edu/demos/tagging/?
> cmd=add_tag&username=fkilgour
>
> My Javascript is below, and it works IF I retain the "alert
> ( 'Grrr!' )" line. Once I take the alert out of the picture I get a
> Javascript error "xmldoc has no properties". Here's my code:
>
>
>function add_tag() {
>
> // define username
> var username  = 'fkilgour';
>
> // create an httpRequest
> var httpRequest;
> if ( window.XMLHttpRequest ) { httpRequest = new XMLHttpRequest();
> }
> else if ( window.ActiveXObject ) { httpRequest = new ActiveXObject
> ( "Microsoft.XMLHTTP" ); }
>
> // give the httpRequest some characteristics and send it off
> httpRequest.onreadystatechange = function() {
>
>  if ( httpRequest.readyState == 4 ) {
>
>   var xmldoc = httpRequest.responseXML;
>   var root_node = xmldoc.getElementsByTagName( 'root' ).item( 0 );
>   alert ( root_node.firstChild.data );
>
>  }
>
> };
>
> httpRequest.open( 'GET', './index.cgi?cmd=add_tag&username=' +
> username, true );
> httpRequest.send( '' );
> alert ( 'Grrr!' );
>
>}
>
>
> What am I doing wrong? Why do I seem to need a pause at the end of my
> add_tag function? I know the anonymous function -- function() -- is
> getting executed because I can insert other httpRequest.readyState
> checks into the function and they return. Grrr.
>
> --
> Eric Lease Morgan
> University Libraries of Notre Dame
>
> (574) 631-8604


[CODE4LIB] Access 2007 summary

2007-11-28 Thread Andrew Nagy
Does anyone know of or have an in-depth review of the Access 2007 conference?  
Was there video captured?  I was unable to attend, but wanted to check it out 
this year.

Thanks
Andrew


[CODE4LIB] Position: Programmer at Villanova University Library

2007-11-06 Thread Andrew Nagy
Library Software Development Specialist
Falvey Library, Villanova University

This position reports to the Technology Management Team and is responsible for 
designing, developing, testing and deploying new technology methods, tools and 
resources to extend and enhance digitally-mediated or digitally-delivered 
library services, including but not limited to, Web interfaces, digital 
reference and research assistance, digitization and digital library 
development, institutional repository services, "portalization" and 
personalization of library resources, the integration of handheld devices into 
the library service environment, Web content management, collaboration 
software, staff Intranet services, online knowledge base development, and 
related areas.  This person will also serve as trainer and mentor to librarians 
and other library staff involved in new technology initiatives, with an 
emphasis on skill transfer, skill development, and the expansion of the 
library's technology base in support of continuously improving digital services 
for library users.

Requirements include:  Bachelor's degree in computer science, information 
systems or a related field required; 1 year of professional experience 
developing and implementing technology projects in a collaborative, team-based, 
goal-oriented environment; ability to work independently on programming and 
technology implementation projects; ability to listen to and act upon the needs 
and suggestions of others, in support of user-oriented systems design and 
development; excellent analytical skills to support problem solving, systems 
analysis, software functional specification, and debugging; ability to juggle 
multiple competing priorities; excellent writing skills for the preparation of 
clear, user-oriented documentation; capacity for higher-level strategic 
analysis of technology trends; working knowledge of PC and Unix-based computing 
platforms and operating systems; working knowledge of web development tools and 
technologies, including PHP, ASP, .Net, Java, HTML and CSS, AJAX, XML, XSLT and 
XQuery; working knowledge of Unix server administration and related scripting 
languages; working knowledge of SQL, database systems, and basic principles of 
database design.

You may email resumes, but please include a cover letter, resume and references 
in only one attachment.  Please submit resumes to [EMAIL PROTECTED], or fax to 
(610) 519-6667.  Please send only one resume.

For further information, call Barbara Kearns at ext. 9-4235 or the Villanova 
Job Hotline at (610) 519-5900


Re: [CODE4LIB] Libstats is looking for project leaders

2007-10-26 Thread Andrew Nagy
Nate, we use LibStats religiously here.  I would be interested in joining the 
community - but similarly to you, I don't have much time to spare.

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Nathan Vack
> Sent: Friday, October 26, 2007 12:42 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] Libstats is looking for project leaders
>
> Hi all,
>
> I was recently involved in a discussion about the mechanics of
> running an open-source project over at Library Web Chic, and I've
> come to the conclusion that for a project to succeed, it really needs
> to have at least a small, dedicated community. A community of one is
> no community at all ;-)
>
> For the last few years, I've been in charge of running Libstats, a
> small, GPL'd reference statistics tracking / knowledgebase project.
> For a variety of reasons*, I'm unlikely to have a significant amount
> of time to devote to the project ever again... and there are a lot of
> things that could use improvement, ranging from squashing bugs to
> improving documentation to adding features to answering support
> questions.
>
> So... here's my call for volunteers. This project is quite small
> (<6400 LoC), PHP / MySQL-based, and seems to work pretty well for the
> majority of its users -- it'd be a great place for someone new to
> open-source project management to learn the ropes. I'd especially
> like someone outside our university to have some ownership of the
> project.
>
> Interested? Head over to http://groups.google.com/group/libstats --
> that's where the party's at.
>
> Cheers,
> -Nate Vack
> Wendt Library
> University of Wisconsin - Madison
>
> * Full disclosure: I'm also working on a hosted, closed-source
> competitor to this project... so for me to stay solely in charge of
> Libstats would be conflict-of-interest-central. That's not my only
> reason, but it's a big one.


Re: [CODE4LIB] LC class scheme in XML or spreadsheet?

2007-09-25 Thread Andrew Nagy
This topic came up a few weeks ago on code4lib too, where were you Ed!? :)

I will echo something that Roy mentioned in the thread from a few weeks back: 
would the LOC be willing to create a web service where you could supply a call 
number and it would return the hierarchy of topic areas for that number?

Thanks
Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Ed Summers
> Sent: Monday, September 24, 2007 8:19 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] LC class scheme in XML or spreadsheet?
>
> It's funny this subject just came up on one of the open-library
> discussion lists this week [1]. A whiles ago now Rob Sanderson, Brian
> Rhea (University of Liverpool) and I pulled down the LC Classification
> Outline pdf files, converted them to text, wrote a python munger to
> convert the text into what ended up being a SKOS RDF file. We made the
> code available [2] and you can see the resulting SKOS (which needs
> some URI work) [3].
>
> It's kind of a work in progress (still). I wanted to get to the point
> that the rdf file was leveraged in a little python library (possibly
> as a pickled data structure) for easily validating LC numbers and
> looking them up in the outline.
>
> I'd be interested in any feedback.
>
> //Ed
>
> [1] http://mail.archive.org/pipermail/ol-lib/2007-September/69.html
> [2] http://inkdroid.org/svn/lcco-skos/trunk/rdfizer/
> [3] http://inkdroid.org/tmp/lcco.rdf


Re: [CODE4LIB] LCC classifications in XML

2007-08-28 Thread Andrew Nagy
Yes please!  Is Ed listening in?

Thanks
Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Jonathan Brinley
> Sent: Tuesday, August 28, 2007 3:36 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: Re: [CODE4LIB] LCC classifications in XML
>
> Not long ago, I recall Ed Summers sharing the classification outline
> in RDF. I may still have a copy of that around if you're interest.
>
> Have a nice day,
> Jonathan
>
>
> > On 8/28/07 12:16 PM, "Andrew Nagy" <[EMAIL PROTECTED]> wrote:
> >
> > > Does anyone know of a place where the LCC Callnumber
> classifications can be
> > > found in a "parseable" format such as XML?
> > >
>
>
> --
> Jonathan M. Brinley
>
> [EMAIL PROTECTED]
> http://xplus3.net/


[CODE4LIB] LCC classifications in XML

2007-08-28 Thread Andrew Nagy
Does anyone know of a place where the LCC call number classifications can be 
found in a "parseable" format such as XML?

Thanks
Andrew


Re: [CODE4LIB] code.code4lib.org

2007-08-13 Thread Andrew Nagy
> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Will Kurt
>
> One of the things that's really lacking in the library community is
> something like a sourceforge.net to serve as a central repository for
> all opensource library projects and this certainly sounds like a step
> in the right direction (maybe there already is such a thing and I
> don't know about it).  I'm sure many people out there have at least
> snippets of code or various libraries that they might not know where
> to publish or are already publishing but other people don't know
> where to find them.
>

I totally agree.  I had always wished for a place on code4lib where people could 
share snippets of code - a MARC library, an XSLT doc, etc.

The code that runs the pear.php.net repository site is open source.  I think it 
would be neat to have a code repository like PEAR/CPAN where we can all share 
code snippets and documentation for the code.

Andrew


Re: [CODE4LIB] parse an OAI-PHM response

2007-07-30 Thread Andrew Nagy
Andrew, I began building a PHP OAI client library based on an OAI server library 
that I wrote a while back.  The OAI client library is not complete, but it can 
get you started.  I attached it in a file called Harvester.php.
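
If you want a quick standalone starting point while you look it over - this is 
not Harvester.php itself, just a bare-bones sketch, and the base URL is a 
placeholder for your DSpace OAI-PMH endpoint:

<?php
// Issue a ListRecords request and walk the response with SimpleXML.
$base = 'http://dspace.example.edu/oai/request';
$xml  = simplexml_load_string(
    file_get_contents($base . '?verb=ListRecords&metadataPrefix=oai_dc')
);
if ($xml === false) {
    die("Could not fetch or parse the OAI-PMH response\n");
}

foreach ($xml->ListRecords->record as $record) {
    // Dublin Core lives in its own namespaces inside <metadata>.
    $dc = $record->metadata
                 ->children('http://www.openarchives.org/OAI/2.0/oai_dc/')->dc
                 ->children('http://purl.org/dc/elements/1.1/');
    echo $record->header->identifier, ' : ', $dc->title, "\n";
}

// ListRecords responses are paged: grab the resumptionToken (if any) and
// re-request with verb=ListRecords&resumptionToken=... until it comes back empty.
?>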

Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Andrew Hankinson
> Sent: Friday, July 27, 2007 9:32 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] parse an OAI-PHM response
>
> Hi folks,
> I'm wanting to implement a PHP parser for an OAI-PMH response from our
> Dspace installation.  I'm a bit stuck on one point: how do I get the
> PHP
> script to send a request to the OAI-PMH server, and get the XML
> response in
> return so I can then parse it?
>
> Any thoughts or pointers would be appreciated!
>
> Andrew


Harvester.php
Description: Harvester.php


Re: [CODE4LIB] code4lib.org hosting

2007-07-30 Thread Andrew Nagy
In case I can't make the conversation, I must suggest Bastille - a Linux 
package that does firewalling and IP masquerading.  I have been using it for 
about 8 years now and have never had a Linux box running it get hacked.

I even had my ISP kill my network connection once because my server was being 
attacked by thousands of machines; the attack never once got through, and the 
machine never experienced any performance degradation.

http://www.bastille-linux.org/

Good luck
Andrew

> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Ed Summers
> Sent: Friday, July 27, 2007 5:18 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] code4lib.org hosting
>
> As you may have seen or experienced code4lib.org is down for the count
> at the moment because of some hackers^w crackers who compromised anvil
> and defaced various web content and otherwise messed with the
> operating system. anvil is a machine that several people in the
> code4lib community run and pay for themselves.
>
> Given that code4lib has grown into a serious little gathering, with
> lots of effort being expended by the likes of Jeremy Frumkin and Brad
> LaJenuesse to make things happen -- it seems a shame to let this sort
> of thing happen. We don't have any evidence, but it seems that the
> entry point was the fact that various software packages weren't kept
> up to date.
>
> Anyhow, this is a long way of inviting you to a discussion Aug 1st
> @7PM GMT in irc://chat.freenode.net/code4lib to see what steps need to
> be taken to help prevent this from happening in the future.
> Specifically we're going to be talking about moving some of the web
> applications to institutions that are better set up to manage them.
>
> If this interests you at all try to attend!
>
> //Ed


Re: [CODE4LIB] Open Source OPAC - VUFind Beta Released

2007-07-20 Thread Andrew Nagy
> -Original Message-
> Alexander Johannesen
>
> Excellent stuff, and thanks for the open-source effort.
>
> Three things ;
>
> 1. Will there be efforts towards a development community outside your
> library?

Completely.  I just imported all of the code into the SourceForge SVN server.  It 
is now completely in the hands of the community.  Our goal with open-sourcing 
the code is to gain support and help from other libraries in making a high 
quality "resource discovery system".  Villanova University is completely 
committed to this software, and I hope other universities will be too.  Together 
we can collaborate on the development and functionality.  We already see this 
with applications such as DSpace.

>
> 2. http://www.vufind.org/demo/Record/56179 has serious problems in its
> "similar items" section. :)

Yeah, yeah, yeah :)  We will get it better - this still needs some tweaking.  
In some cases it works really well, but in other cases it works really poorly.

>
> 3. If you scroll down a list of things and then do something that
> requires a login, only the top part of the page that's not in view has
> the action. The user sees nothing, and nothing happens.

I noticed this yesterday afternoon as well.  I think that ajax-y login was one 
of the last features to be developed.  We still have a bit of work to do on the 
ajax UI stuff.

>
> Apart from that, great stuff and, if you accept such, I'd love to
> participate in ways that I can.

Please do!  I have also set up two mailing lists on SourceForge (sorry Gabe): one 
for general use and one to manage patches, etc.

Thanks
Andrew


[CODE4LIB] Open Source OPAC - VUFind Beta Released

2007-07-19 Thread Andrew Nagy
Sorry for cross-posting

Hello All,

I am pleased to announce the release of our next-gen library catalog browser, 
VuFind.  It is now officially open source under the GPL and hosted on 
SourceForge.  We have been working on the application for quite some time now, 
almost a year, and for the past few months have been working with some local 
schools to test the application and begin to build some install scripts.

Currently out of the box the software works with the Voyager catalog, but we 
are working on adding additional drivers to work with your favorite ILS!  Even 
Evergreen and Koha!  (If you would like to volunteer to build an ILS Driver, we 
would highly appreciate the contribution.)

Some of you may recall my presentation at Code4Lib 2007 on our initial efforts 
of the software using a Native XML Database.  Now using the power of Apache 
Solr, the speeds are astonishing and as we all know, Apache Solr really does 
rock!

Please have a look at our project website to download the software and you can 
even try out a live demo of the software:
http://www.vufind.org/

We are currently in a beta stage of development with the software but hope to 
have all necessary functionality completed by the end of the summer and have a 
stable production release by the Fall semester.  Please feel free to sign up 
for the mailing list to let us know any thoughts you have on the project and 
please report any bugs you encounter in testing, etc.

Enjoy!
Andrew


Re: [CODE4LIB] marc2oai

2007-05-29 Thread Andrew Nagy
> -Original Message-
> From: Code for Libraries [mailto:[EMAIL PROTECTED] On Behalf Of
> Eric Lease Morgan
> Sent: Tuesday, May 29, 2007 1:53 PM
> To: CODE4LIB@listserv.nd.edu
> Subject: [CODE4LIB] marc2oai
>
> Does anybody here know of a MARC2OAI program?
>

Eric, I have a small script that does this - it is fairly simple, probably 
about 100 lines of code or so.

I have a nightly cron script that gets any new/modified MARC records from the 
past 24 hours out of the catalog and then runs marc2xml on the dump file.  Then 
I have a small script that breaks up the large MARCXML file into individual 
XML files and imports them into Solr!  I can then use an XSL stylesheet such as 
the LOC's marc2oai to produce an OAI document, or marc2rdf, etc., on the full 
MARCXML files (since Solr doesn't keep the original record).  I have yet to 
incorporate my OAI server code into this, but since it is already written, it 
would be a fairly easy merge.
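
Roughly, the transform-and-load step looks something like this (a generic 
illustration, not the actual script - it assumes marc2xml has already produced 
records.xml, that a marcxml2solr.xsl stylesheet is on hand, and that Solr is 
listening on localhost):

<?php
// Transform the nightly MARCXML dump into Solr <add> documents and post them.
function postToSolr($body)
{
    $context = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: text/xml\r\n",
        'content' => $body,
    )));
    return file_get_contents('http://localhost:8983/solr/update', false, $context);
}

$marcxml = new DOMDocument();
$marcxml->load('records.xml');          // the nightly MARCXML dump

$xsl = new DOMDocument();
$xsl->load('marcxml2solr.xsl');         // MARCXML -> Solr <add> documents

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);

postToSolr($proc->transformToXML($marcxml));   // send the <add> docs
postToSolr('<commit/>');                       // make them searchable
?>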

This is all built into my NextGen OPAC that I am working on and hope to 
open-source sometime this summer.  So sorry, I'm not allowed to hand out the 
code just yet :(

Thanks
Andrew


[CODE4LIB] Posting Presentations

2007-03-07 Thread Andrew Nagy

I am still having difficulty posting my presentation to the C4L
website.  I am getting an error about my file not being authorized or
something to that effect.  I did not try last night, but I will try
again tonight.

Has anyone checked to make sure that this is working?

Andrew


Re: [CODE4LIB] Preconference

2007-02-22 Thread Andrew Nagy

You can find my schema file to match the XSLT doc at:
http://library.villanova.edu/technical/SolrSchema.xml

Enjoy,
Andrew

Emily Lynema wrote:

Hi Andrew,

I was thinking about using your marcxml2solr.xsl to quickly transform
my marcxml to solr input for testing. Do you have a solr schema file
as well that could be used to jumpstart the system?

Thanks!
-emily

Andrew Nagy wrote:

Andrew Nagy wrote:


I have an XSLT doc for transforming MARCXML to SOLR XML that I can
share around.


I was asked if I could post my XSLT doc, so here it is!

It is probably somewhat geared toward my collection of data and I had
some custom scripting for determining the format more accurately, but I
removed it for compatibility reasons.  This will give you a chance to
play with some data before the preconference.

http://library.villanova.edu/technical/marcxml2solr.xsl

Enjoy!
Andrew




Re: [CODE4LIB] Preconference

2007-02-13 Thread Andrew Nagy

Andrew Nagy wrote:

I have an XSLT doc for transforming MARCXML to SOLR XML that I can
share around.

I was asked if I could post my XSLT doc, so here it is!

It is probably somewhat geared toward my collection of data and I had
some custom scripting for determining the format more accurately, but I 
removed it for compatibility reasons.  This will give you a chance to
play with some data before the preconference.

http://library.villanova.edu/technical/marcxml2solr.xsl

Enjoy!
Andrew


Re: [CODE4LIB] Preconference

2007-02-13 Thread Andrew Nagy

I have an XSLT doc for transforming MARCXML to SOLR XML that I can share
around.

Andrew

Jonathan Rochkind wrote:

If we bring MARCXML and/or MODS, can we assume that there will be people
who can help us process that data into something useable by Solr?  That
would be a nice, at any rate.

Jonathan

Erik Hatcher wrote:

On Feb 13, 2007, at 9:47 AM, Susan E Teague Rector/FS/VCU wrote:

Are we supposed to be using a predefined set of data for the
preconference
or can we use our own data?


Susan - I'm going to package up a lot of stuff (Solr, sample
datasets, Luke, etc) to help everyone get started, but bringing your
own data is encouraged as long as you also bring along the necessary
tools and know-how to process that data into something usable by Solr
(either XSLT to .xml files, or via code that speaks to Solr directly).

So by all means bring your data.

   Erik



--
Jonathan Rochkind
Sr. Programmer/Analyst
The Sheridan Libraries
Johns Hopkins University
410.516.8886
rochkind (at) jhu.edu


Re: [CODE4LIB] Very large file uploads, PHP or possibly Perl

2007-02-09 Thread Andrew Nagy

I have done large file uploads in PHP.  Make sure you have the following
set in php.ini:

upload_max_filesize = <your maximum size, e.g. 100M>
file_uploads = On
post_max_size = <at least as large as upload_max_filesize>

Note that upload_max_filesize and post_max_size cannot be changed at runtime
with ini_set(); if you don't want to raise them globally in php.ini, you can
override them per directory or per virtual host (e.g. in an .htaccess file or
the Apache config), which allows a more granular level of control for security
reasons, etc.

I have never needed the MAX_FILE_SIZE form input, nor should you have to raise
memory_limit very much, since the uploaded file itself is not loaded into
memory - just information about the file.
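
A bare-bones handler to go with those settings might look like this (illustrative 
only - the field name "pdf" and the target directory are assumptions):

<?php
// Check the error code PHP reports for the upload before trusting the file.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $file = $_FILES['pdf'];             // assumes <input type="file" name="pdf">

    if ($file['error'] === UPLOAD_ERR_OK) {
        move_uploaded_file($file['tmp_name'], '/data/uploads/' . basename($file['name']));
        echo 'Received ' . $file['size'] . " bytes\n";
    } elseif ($file['error'] === UPLOAD_ERR_INI_SIZE
           || $file['error'] === UPLOAD_ERR_FORM_SIZE) {
        echo "File exceeds the configured size limit\n";
    } else {
        echo 'Upload failed with error code ' . $file['error'] . "\n";
    }
}
?>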

Andrew

Thomas Dowling wrote:

I have always depended on the kindness of strange PHP gurus.

I am trying to rewrite a perpetually buggy system for uploading large
PDF files (up to multiple tens of megabytes) via a web form.  File
uploads are very simple in PHP, but there's a default maximum file size
of 2MB.  Following various online hints I've found, I've gone into
php.ini and goosed up the memory_limit, post_max_size, and
upload_max_size (and restarted Apache), and added an appropriate hidden
form input named MAX_FILE_SIZE.  The 2MB limit is still in place.

Is there something I overlooked?  Or, any other suggestions for how to
take in a very large file?

[My current Perl version has a history of getting incomplete files in a
non-negligible percentage of uploads.  Weirdness ensues: whenever this
happens, the file reliably cuts off at the same point, but the cutoff is
not a fixed number of bytes, nor is it related to the size of the file.]


--
Thomas Dowling
[EMAIL PROTECTED]



Re: [CODE4LIB] Solr indexing -- why XSL is the wrong choice

2007-01-19 Thread Andrew Nagy

Casey, we have had great success with XSL for MARCXML to Solr, so I
can't agree with everything you are saying.  However, I anxiously await
your presentation on your successes with Solr so you can persuade me to
the dark side :)

Casey Durfee wrote:


I think there are many good reasons why XSLT is absolutely the wrong tool for 
the job of indexing MARC records for Solr.

1) Performance/Speed: In my experience even just transforming from MARCXML to 
MODS takes a second or two (using the LoC stylesheet), due to the stylesheet's 
complexity and inefficiency of doing heavy-duty string manipulation in XSL.  
That means you're looking at an indexing speed of around 1 record/second.  If 
you've got 1,000,000 bib records, it'll take a couple of weeks just to index 
your data.  For comparison, the indexer of our commercial OPAC does about 50 
records per second (~6 hours for a million records) and the one I've written in 
Jython (by no means the fastest language out there) that doesn't use XSL can do 
about 150 records a second (about 2 hours for 1 million records).


I can transform 500,000 records from MARCXML to Solr XML in about 4
hours, then about 2 hours for importing into Solr.
Considering time is not truly a factor, I think at this point it is
totally based on developer preference (assuming your XSLT process does not
take 2 weeks).  Once you have your records in Solr, you are all set.
You only need to re-run your transformation on a periodic basis to catch
records that change.  In our instance that might only be 5 - 10 records
per day.


2) Reusability:  What if you want to change how a field is indexed?  You would 
have to edit the XSLT directly (or have the XSL stylesheet automatically 
generated based on settings stored elsewhere).

a) Users of the indexer shouldn't have to actually mess with programming logic 
to change how it indexes.  You shouldn't have to know a thing about programming 
to change the setup of an index.

b) It should be easy for an external application to know how your indexes have 
been built.  This would be very difficult with an XSL stylesheet.  Burying 
configuration inside of programming logic is a bad idea.

c) The Solr schema should be automatically generated from your index setup so 
all your index configuration is in one place.  I guess you could write 
*another* XSL stylesheet that would transform your indexing stylesheet into the 
Solr schema file, but that seems ridiculous.

d) Automatic code generation is evil.  Blanchard's law: "Systems that require code 
generation lack sufficient power to tackle the problem at hand."  If you find 
yourself considering automatic code generation, you should instead be considering a more 
dynamic programming language.


I agree with your argument for abstracting your programming from your
data so that a non-tech-savvy librarian could modify the Solr settings.
But if you modify the Solr settings, you need to (at this point)
reimport all of your data, which means that you either have to change your
XSLT or your transformation application.  I personally feel that a
less tech-savvy individual can pick up XSLT more easily than Java coding.
Maybe I am understanding you incorrectly though.


3) Ease of programming.

a) Heavy-duty string manipulation is a pain in pure XSLT.  To index MARC records 
have to do normalization on dates and names and you probably want to do some 
translation between MARC codes and their meanings (for the audience & language 
codes, for instance).  Is it doable?  Yes, especially if you use XSL extension 
functions.  But if you're going to have huge chunks of your logic buried in 
extension functions, why not go whole hog and do it all outside of XSLT, instead of 
having half your programming logic in an extension function and half in the XSLT 
itself?


I can see your argument for this; however, I like to abstract my layers
of applications as mentioned above.  So in this respect, I have a script
that runs the XSLT.  Inside the script is also some logic that the XSLT
refers back to for the manipulation and massaging of the data.  I can
keep all XML-related transformation logic in my XSL and all of my coding
logic in my script.  Again, I think it boils down to preference.
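
(For anyone curious how the "refers back to the script" part works in PHP, here
is a minimal sketch - the function name and file names are just placeholders:)

<?php
// Register PHP functions with the XSLT processor so the stylesheet can
// call back into the script via the php: extension namespace.
function normalizeCallNumber($raw)      // called from within the XSLT
{
    return strtoupper(trim($raw));
}

$doc = new DOMDocument();
$doc->load('records.xml');

$xsl = new DOMDocument();
$xsl->load('marcxml2solr.xsl');

$proc = new XSLTProcessor();
$proc->importStylesheet($xsl);
$proc->registerPHPFunctions();          // expose PHP functions to the stylesheet

echo $proc->transformToXML($doc);

// In the stylesheet (with xmlns:php="http://php.net/xsl" declared):
//   <xsl:value-of select="php:function('normalizeCallNumber', string(.))"/>
?>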


b) Using XSLT makes object-oriented programming with your data harder.

That's a bold statement.

  Your indexer should be able to give you a nice object representation of a 
record (so you can use that object representation within other code).  If you 
go the XSLT route, you'd have to parse the MARC record, transform it to your 
Solr record XML format, then parse that XML and map the XML to an object.  If 
you avoid XSLT, you just parse the MARC record and transform it to an object 
programmatically (with the object having a method to print itself out as a Solr 
XML record).

Honestly, all this talk of using XSLT for indexing MARC records reminds me of 
that guy who rode across the United States on a riding lawnmower.  I am looking 
forward to there being a standard

Re: [CODE4LIB] a few code4lib conference updates

2007-01-19 Thread Andrew Nagy

Nathan Vack wrote:

On Jan 19, 2007, at 9:51 AM, LaJeunesse, Brad wrote:


I must strongly encourage everyone attending to bring
fully-charged laptops and spare batteries (if you have them). The
auditorium has 60 power outlets available, which gives us roughly a
2:1
ratio of outlets to people.


Spare batteries rather expensive... but power strips are dead cheap.

Doesn't everyone travel with powerstrips in their laptop bag?

Maybe we could have some wireless power stations?
http://www.splashpower.com/

Andrew


Re: [CODE4LIB] Getting data from Voyager into XML?

2007-01-17 Thread Andrew Nagy

Nathan Vack wrote:

Unless I'm totally, hugely mistaken, MARC doesn't say anything about
holdings data, right? If I want to facet on that, would it make more
sense to add holdings data to the MARC XML data, or keep separate xml
files for holdings that reference the item data?

As others have said, you can get *some* holdings data in a MARCXML file,
but nothing that will help you - especially since the holdings data could
change at a moment's notice.  You will have to get access to your
holdings data some other way, in real time or on a 15 - 30 minute delay.
Andrew


Re: [CODE4LIB] Getting data from Voyager into XML?

2007-01-17 Thread Andrew Nagy

Bess Sadler wrote:

As long as we're on the subject, does anyone want to share strategies
for syncing circulation data? It sounds like we're all talking about
the parallel systems á la NCSU's Endeca system, which I think is a
great idea. It's the circ data that keeps nagging at me, though. Is
there an elegant way to use your fancy new faceted browser to search
against circ data w/out re-dumping the whole thing every night?

I will talk about this in my presentation at the conference.
Syncing every night is too infrequent if you ask me.  I considered
syncing like every 15 minutes, until I stepped back, looked at that
idea realistically, and laughed at myself.

Our system (going into beta next week!) uses real-time SQL calls against
our Voyager DB for location, status, etc.

Andrew


Re: [CODE4LIB] Getting data from Voyager into XML?

2007-01-17 Thread Andrew Nagy

Nathan Vack wrote:

Hey cats,

I'm starting to think (very excitedly) about the Lucene session, and
realized that I'd better get our data into an XML form, so I can do
interesting things with it.

Anyone here have experience (or code I could steal) dumping data from
Voyager into... anything? I'm happy working in PHP, Java, Ruby, or
perl -- though happiest, probably, in Ruby.

Nate, it's pretty easy.  Once you dump your records into a giant MARC
file, you can run marc2xml
(http://search.cpan.org/~kados/MARC-XML-0.82/bin/marc2xml).  Then run an
XSLT against the MARCXML file to create your Solr XML docs.

One thing I am hoping can come out of the preconference is a
standard XSLT doc.  I sat down with my metadata librarian to develop our
XSLT doc -- determining which fields should be searchable, which fields
should be left out to help speed up results, etc.

It's pretty easy - I think you will be amazed at how fast you can have a
functioning system with very little effort.

Andrew


Re: [CODE4LIB] lucene pre-conference - reminder

2006-12-19 Thread Andrew Nagy

Bess, do you have a set time for the pre-conference?  I need to change
my flight reservations so I can make it.

Thanks
Andrew

Bess Sadler wrote:


Hey, code4libbers,

If you are attending code4lib con 2007, you might also want to attend
the one day pre-conference workshop about lucene and solr (and how to
use them to index / search / browse library collections). It will be
taught by the incomparable Erik Hatcher (author of _Java Development
with Ant_ and _Lucene in Action_). Registration is free, but seats
are limited, so if you want to attend please make sure to reserve a
spot. Registration consists of sending me an email and telling me you
plan to attend.

The following list are the people who have registered. If you're not
on this list, then I haven't reserved you a spot. Please let me know
asap if you plan to come so we can plan our seating and space needs.

Thanks!

Bess Sadler

People who have registered for the pre-conference:
Adam Soroka
Andrea Goethals
Andrew Darby
Andrew Nagy
Antonio Barrera
Art Rhyno
Bess Sadler
Dan Scott
Ed Summers
Edwin Sperr
Emily Lynema
Jonathan Gorman
Jonathan Rochkind
Kevin S. Clarke
Kristina Long
Michael Doran
Michael Witt
Mike Beccaria
Parmit Chilana
Peter Binkley
Ross Singer
Spencer McEwen
Steve Toub
Tito Sierra
Tom Keays
Winona Salesky






Elizabeth (Bess) Sadler
Head, Technical and Metadata Services
Digital Scholarship Services
Box 400129
Alderman Library
University of Virginia
Charlottesville, VA 22904

[EMAIL PROTECTED]
(434) 243-2305


Re: [CODE4LIB] code4lib lucene pre-conference

2006-12-13 Thread Andrew Nagy

Erik Hatcher wrote:


At this point, I'm planning on winging it with the datasets.  By late
February I will have (high on my TODO list now!) built a light-weight
Solr mechanism for bringing in MARC data, and perhaps more (iTunes
data files would make a fun one) and doing simple skinnable front-
ends on Solr.  Rails at least, but also demo the various formats that
Solr can output making it pluggable into whatever environment easily.


Erik, here is an XSLT doc I created for transforming MARCXML to Solr
XML.  It has some PHP components in it that just turn some of the ugly
MARC data into something friendlier.  It also has some logic based on
our data, but is fairly generic.

I was hoping that during the preconference we could all discuss this
transformation process.  I have been working with our metadata librarian
on determining which fields should be included and which should be
grouped together for indexing and searching purposes.  However, someone
out there might have better ideas about how best to transform the
data into Solr.

Andrew

[The inline XSLT stylesheet did not survive the list archive: its element markup 
was stripped, leaving only the namespace declarations 
xmlns:xsl="http://www.w3.org/1999/XSL/Transform" and xmlns:php="http://php.net/xsl".]

Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-29 Thread Andrew Nagy

For an interesting read:
XQuery Processing with Relevance Ranking
Leonidas Fegaras
http://www.springerlink.com/content/em728eqn888nuer4/


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-29 Thread Andrew Nagy

Kevin S. Clarke wrote:


Fwiw Andrew, I'd suggest you are not seeing the "true spirit of your
NXDB."  Try to put MARC into a RDBMS and you are going to run into the
same problem.  You have to index intelligently or reorganize the data
(which is the default when you put XML into a RDBMS anyway).  Perhaps
a criticism of NXDBs could be that they make it sound like they can
handle anything you throw at them without regard for what that is...
"If it is XML, we can handle it."


I agree, and that is why I have refactored the MARCXML into a format
that I feel an NXDB can handle.  They cannot handle just any XML format, and
I have heard confessions from the developers of these systems on exactly
this point.  It seems that we can all agree that both MARC and
MARCXML are bad formats!



Data can have a structure that makes it more accessible or less.  The
promise of XML (as a storage format rather than transmission format
(which is its other purpose)) is that you can work with data in its
native format (no deconstruction necessary).  However, there is
nothing about XML or NXDBs that makes one use a well structured data
format.


No, you are right.  NXDBs are too dumb to determine whether your XML format
is going to work well or not.  But the wonders of XSLT make it simple to
transform to a modified format that an NXDB can handle well.

So ... while we are on this topic: you wouldn't want to index MARCXML
records in Lucene, you would use MARC21, right?  Why deal with the
overhead of XML if it is not necessary?  We have to reformat our data no
matter what to best fit our storage/search system.

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-29 Thread Andrew Nagy

Clay Redding wrote:


Hi Andrew (or anyone else that cares to answer),

I've missed out on hearing about incompatabilites between MARCXML and
NXDBs.   Can you explain?  Is this just eXist and Sleepycat, or are
there others?  I seem to recall putting a few records in X-Hive with no
problems, but I didn't put it through any paces.


Yes, I have only done my testing with eXist and Sleepycat, but I also
have an implementation of MarkLogic that I would like to test out.  I
imagine, though, that all NXDBs will have the same problem.  This is the
heart of my proposed talk, and it has to do with the layout of MARCXML.
Adding a few records to any NXDB will work like a charm; do your testing
with 250,000+ records and then you will begin to see the true spirit of
your NXDB.


Also, if there was a cure to the problems with MARCXML (I'm sure we can
all think of some), what would you suggest to help alleviate the
problems?


Sure, I know of a cure!  I have come up with a modified MARCXML schema,
but as I am investigating Solr further, I think the Solr schema is also
a cure.

The problem with MARCXML is the fact that all of the elements have the
same name and then use attributes to differentiate them (excuse me
while I barf); this makes indexing at the XML level very difficult,
especially for NXDBs.  I got a concurring agreement from the main developers
of both packages (eXist, Berkeley) on this front.  My schema just puts
each of the MARC fields into its own element.  Instead of MARCXML's generic
datafield elements, I created named fields (a <title> element, for example),
and instead of splitting a field across multiple <subfield> tags, I just put
all of the subfields into one element.  No one needs to search (from my
perspective) the subtitle ("b") separately from the main ("a") title, so I
just made a really simple XML document that is 1/4 the size.  By doing this
I was able to take a 45 minute search of MARCXML records and reduce it down
to results in 1 second.  The main boost was not the reduction in file size,
but the way the indexing works.

Give it a shot, I promise better results!

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Kevin S. Clarke wrote:


Have you had a chance yet to evaluate the 1.1 development line?  It is
supposed to have solved the scaling issues.  I haven't tried it myself
(and remain skeptical that it can scale up to the level that we talk
about with Lucene (but, as you point out, it is trying to do more than
Lucene too)).


I gave the 1.1 line a shot, but still saw abysmal results ... I sent
Wolfgang (the lead guy) my MARCXML records and he implemented it in my
development environment and found the same issues.  The major problem
with it all is the ugly mess that is MARCXML and its "incompatibility"
with native XML DBs.  That said, I still have some ideas that I have not
had a chance to test yet under the 1.1 branch.

I just finished coding our beta OPAC, so I am now heading back into my
load & scalability testing.  I am using Berkeley DB XML, which beats the
pants off of eXist in performance but has nowhere near the feature set of
eXist.  I plan to re-test eXist 1.1 on my production server so I can get
a better handle on the speeds on a machine with a bit more beef.

I am also going to give this Nux a shot too.  Anyone out there using it?
http://dsd.lbl.gov/nux/index.html


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Casey Durfee wrote:



I thought that was the point of using interfaces?  I guess I don't get why you 
need a standard to be compelled to do something you should be doing anyway -- 
coding to interfaces, not implementations.



Interfaces work well with like products (a database abstraction library
is a great example); however, interfaces don't lend themselves well to products that
achieve a similar goal but work differently altogether.  Relational
databases all work the same: there are databases, each database has
tables, views, procedures, etc. and each table has columns, etc.
However, more infantile systems such as xml stores are hard to
map in a similar fashion.  I ran into this exact problem: I developed a
system around eXist and developed an interface for the data layer and a
"driver" for interacting with exist.  I then wanted to compare other
databases such as berkeley db xml.  I quickly found that they achieve a
common goal, but do not implement the same concepts making them very
hard to compare.  eXist has "collections" to group your xml into
distinct groupings and db xml does not.  In my interface I had a method
called getCollections, but since db xml does not have anything like
this, I could not use that method.  So how would you develop an
interface that would include various xml databases as well as full-text
index systems such as lucene, etc.?  I would imagine this would be very
challenging.
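
To make that concrete, here is a hypothetical sketch of the kind of
interface I mean (not my actual code, and the method names are invented):

<?php
// Hypothetical data-layer interface: it looks reasonable for eXist but
// immediately breaks down once a second backend shows up.
interface XmlDataStore
{
    // Both eXist and Berkeley DB XML can run an XQuery...
    public function query($xquery);

    // ...but "collections" are an eXist concept.  Berkeley DB XML has
    // containers instead, and a pure index like Lucene has neither, so
    // other drivers can only throw an exception here.
    public function getCollections($path);
}

As soon as the second backend doesn't share the first one's concepts, the
"common" interface either shrinks to nearly nothing or sprouts methods
that most drivers cannot honestly implement.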


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Gabriel Farrell wrote:


A Google search on "lucene
xquery parser" (no quotes) brings up Nux and Jackrabbit.  I don't know
much about either project, but they seem to be working already on the
future we're talking about.



Now this sounds promising.  This is exactly what I would be looking
for.  An XQuery interface to lucene.  Or, what Thom has said.  Maybe a
system that allows multiple interfaces to lucene: XQuery, SRU,
OpenSearch, etc.

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Kevin S. Clarke wrote:


By the way, I see a very interesting intersection between Solr and
XQuery because both are speaking XML.  You may have XQueries that
generate the XML that makes Solr do its magic, for instance.  This is
an alternative to fulltext in XQuery, sure... it is something that is
here today (doesn't mean I'll stop thinking about tomorrow though).


There is a good intersection, but if you look at the roadmap for eXist
(the native xml database), they have many of the features that solr offers
(I'm still in the process of setting up solr, so I am not too deep into
the features yet).  eXist is basically an attempt at this intersection.
Too bad it's just too damn slow and still in its infancy.

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Art Rhyno wrote:


I made a big mistake along the way in trying to work with Voyager's call
number setup in Oracle, and dragged Ross along in an attempt to get past
Oracle's constant quibbles with rogue characters in call number ranges.
The idea was to expose the library catalogue as a series of folders using
said call number ranges. This part works well enough when the characters
are dealt with, but breaks down a bit for certain formats. For example,
the University of Windsor lumps most of its microfiche holdings in one
call number with an accession number, and Georgia Tech does something
similar with maps. This can mean individual webdav folders with many
thousands of entries, and some less than elegant workarounds.



So you are replacing SQL calls with WebDAV?  Can you explain this a bit
further?

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-28 Thread Andrew Nagy

Erik Hatcher wrote:


"What if" games are mostly just guessing games in the high tech
world.  Agility is the trait our projects need.  Software is just
that... soft.  And malleable.  Sure, we can code ourselves into a
corner, but generally we can code ourselves right back out of it
too.  If software is built with decent separation of concerns, we can
adapt to changes readily.


I completely agree, but you can't deny it's a valid concern.  I am
always thinking about the future and making sure my software is modular
and flexible so any part can easily be replaced.  So I would hope it's
as easy as just writing a new "driver" for a new system that you want to
replace with.

Anyway, you have all convinced me to give solr a whirl ... I'm
downloading it right now.

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-27 Thread Andrew Nagy

Binkley, Peter wrote:


There would probably be a lot of optimizations you could do within Solr
to help with this kind of thing. Art and I talked a little about this at
the ILS symposium: why not nestle the XML db inside Solr alongside
Lucene? Solr could then manage the indexing of the contents of the db,
and augment your search results with data from the db: you could get
full records as part of your search results without having to store them
in the Lucene index.



At this point, why use a DB at all?  Just store your records in your
server file system.  It's fast, and there are fewer applications to worry
about maintaining.  If your search matches 5 records, just open those 5
files on your server.
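
Something along these lines is all it takes (a sketch only; the directory
and ID scheme are placeholders):

<?php
// Sketch: once the index (Lucene, Solr, whatever) hands back record IDs,
// pull the full MARCXML straight off the file system.  No second
// database to install, tune, or keep in sync.
function fetchRecords(array $ids, $dir = '/data/marcxml')
{
    $records = array();
    foreach ($ids as $id) {
        $file = $dir . '/' . basename($id) . '.xml';  // basename() keeps IDs inside the dir
        if (is_readable($file)) {
            $records[$id] = file_get_contents($file);
        }
    }
    return $records;
}

// A search that matches 5 records costs 5 file opens, e.g.:
// $full = fetchRecords(array('vu123', 'vu456', 'vu789', 'vu1011', 'vu1213'));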

Good conversations ... getting excited for the conference already!

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-27 Thread Andrew Nagy

Casey Durfee wrote:


I am writing a Solr-powered OPAC right now and have not had any performance 
problems (either indexing or querying) using Solr for both data storage and 
search.  You can indicate in Solr whether you want particular data fields to be 
stored, indexed or both.  So I stick the entire MARC record in a Solr field but 
don't index on it.



This is good to know.  What I did was strip down my marcxml records to
only include fields that are needed for searching.  I also formatted the
marcxml fields so that all subfields were in one main field.  For
example I have an element in my xml document called T245 which has the a
and b subfields but not the c subfield, etc.  This way my indexes are
much more compact, and the database is as well, which took my native xml
database implementation from completely worthless to usable.  But I am
still not totally happy with the performance.
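
For what it's worth, pushing a record into Solr with the full MARC stored
but not indexed would look roughly like this from PHP.  This is a sketch:
it assumes a local Solr instance whose schema.xml marks the title field
as indexed and a marc_record field as stored only, and the field names
and paths are made up.

<?php
// Sketch: post one document to Solr's XML update handler, then commit.
// Whether each field is stored, indexed, or both is decided in schema.xml.
$marcxml = file_get_contents('/data/marcxml/vu123.xml');   // placeholder path

$doc = '<add><doc>'
     . '<field name="id">vu123</field>'
     . '<field name="title">Memoranda during the war</field>'
     . '<field name="marc_record">' . htmlspecialchars($marcxml) . '</field>'
     . '</doc></add>';

$ch = curl_init('http://localhost:8983/solr/update');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

curl_setopt($ch, CURLOPT_POSTFIELDS, $doc);
curl_exec($ch);

curl_setopt($ch, CURLOPT_POSTFIELDS, '<commit/>');   // make the new document searchable
curl_exec($ch);
curl_close($ch);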



Just using Solr has proven to be much faster than doing the search in Solr and 
then retrieving full data from another database.  This also has the advantage 
of making it so there's only one thing you gotta keep in sync with the ILS.  
The only data that my OPAC needs to talk to a SQL database for is item-level 
information, which changes too often to keep synced.


My only concern about lucene is the lack of a standard query language.
I went down the native XML database path because of XQuery and XSL; do
lucene and solr offer a comparably strong query language?  Is it a
standard?  What if someone developed a kick-ass text indexer in 2 years
that totally blows lucene out of the water, would you easily be able to
switch systems?

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-27 Thread Andrew Nagy

Bess Sadler wrote:


Hi, Andrew. Since this will be an all-day event, the session would be
starting first thing in the morning on Feb 27. I'm thinking 9am, but
I haven't confirmed that with anyone else. I'm just flying by the
seat of my pants here.


I wouldn't be able to make this then due to time constraints.


That way you can use solr / lucene for search, faceted
browse, etc, and your XML database only for known item retrieval,
which it is generally able to do without performance issues.


I am doing something similar except I am using my file system as my
database for pulling the full marcxml records.  This offers as little
overhead as possible.  Now think about the possibilities of using
something like lucene or postgres as your filesystem.  There are groups
that have been working on such filesystems for years.


I'm
hopping up and down waiting for someone to take this approach with an
ILS, so please come and show us what you've got!


I have proposed a talk on my trials and tribulations of developing this
at this year's code4lib conference.  If it is accepted, I will share all
the gory details.

BTW, have you played with Hadoop?  I guess it's something like an
open-source attempt at Google's distributed storage and processing
infrastructure.  I would be curious about implementing hadoop across a
few servers to store the marcxml records.

Andrew


Re: [CODE4LIB] code4lib lucene pre-conference

2006-11-27 Thread Andrew Nagy

Bess Sadler wrote:


Enough people are interested in ILS related topics that it might be
worth forming groups around specific ILS products. If you are one of
these people, email the list if you're interested in setting up such
a thing.


Bess, this sounds like a great conversation.  You can count me in.
Could you please let me know when this might occur?  I have already
booked my flight into Atlanta for late in the afternoon, so I would need
to change that if you plan on having the session earlier in the day.

We just last week finished up the beta release of our new OPAC, which is
built on a native XML database loaded with modified MARCXML records, but
we have been somewhat disappointed with the database's search times.  I
have been considering looking at other options such as lucene-based
products (XTF, etc.).  This would be a great topic for me.

Thanks!
Andrew


[CODE4LIB] Announcing the Villanova University Digital Library

2006-09-06 Thread Andrew Nagy

The staff of Falvey Memorial Library proudly announces the grand opening
of the Villanova University Digital Library.

The Digital Library is a repository of many digitized items from our
Special Collections, as well as donated items and materials from
partnering institutions.  The repository was developed by library staff
and built on an open source platform.  The repository uses a native XML
database, eXist, to store and organize our digital objects encoded in the
METS format.  The web site allows users to search and view all of the
items stored in the repository by using many of the wonderful XML
technologies such as XQuery and XSLT.

Noteworthy initial digital collections include: the complete collection
of Cuala Press Broadsides, notable as a primary source for many folk
songs and for the illustrations of Jack Yeats, brother of the poet W. B.
Yeats; a signed and edited copy of Memoranda During the War by Walt
Whitman; personal letters and books from the Joseph McGarrity Collection
dealing with Irish and Irish-American history; an illuminated manuscript
of selections from the Holy Koran; and plenty more!  We will be
constantly adding more and more items, so please check back often.

Feel free to browse our collections and enjoy the wonderful images:
http://digital.library.villanova.edu

Enjoy,
Andrew Nagy


[CODE4LIB] blocked IRC

2006-02-28 Thread Andrew Nagy

My university has blocked the standard IRC port due to massive trojan
traffic.  Does the freenode.net irc server allow any other non-standard
ports?  I checked their website, but there is no mention of ports
(http://www.google.com/search?hl=en&q=site%3Awww.freenode.net+port&btnG=Google+Search)

Thanks
Andrew


[CODE4LIB] Code4Lib Code Sharing - (Re: [CODE4LIB] journal)

2006-02-22 Thread Andrew Nagy

Art Rhyno wrote:


I guess I am looking for more recipe sharing, comments in the margins, and
whiteboarding, I wouldn't want to break or detract from anything that is
working now. All of this happens virtually at some level, but there's
still some impedance when compared to lightning talks and physical
gatherings, and sadly, there's a limit to how many conference events can
be mounted.


I spoke briefly with some of you at the restaurant Thursday night during
the conference about a code sharing site for code4libbers.  The
OSS4Lib site is a great resource for libraries considering the world of
open source; however, I think much like the CPAN or PEAR code
repositories, a code4lib repository would act as a great place for us to
"whiteboard" and share code much like Art mentions above.  A place to
not share full blown applications, but a place to post our hacks and
code libraries that we have written.

As a member of the PEAR development community, I know that the web
application that runs it is open source and has ties to CVS for code
management, bug tracking much like Bugzilla, and most importantly, a
place to write documentation for each code snippet.  I'm not necessarily
advocating for an implementation of the PEAR web application, but its
concept would suit the Code4Lib website.  Maybe even a DSpace
implementation would work?  Even a CVS or Subversion repository at its
simplest would help.

Would the maintainers of the Code4Lib website be willing to
support/implement such an application?

Thanks, I really enjoyed the conference.
Andrew


Re: [CODE4LIB] PHP and SSL

2006-01-20 Thread Andrew Nagy

Jeff, SSL has nothing to do with PHP and is something that is under the
control of the web server.

Are you using Apache, or ... dare I say it ... IIS?  Either way, the
vendor of the web server you are using will have plenty of documentation
on setting up an SSL-protected web site.
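
Once the server has mod_ssl (or the IIS equivalent) answering on port
443, the only PHP-level piece you might want is a redirect that pushes
sensitive pages over to HTTPS.  A minimal sketch, not MediaWiki-specific
code:

<?php
// Sketch: bounce the request to HTTPS.  PHP cannot create the encrypted
// channel itself (that is the web server's job); it can only refuse to
// serve the page over plain HTTP.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] == 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
}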

Hope this helps
Andrew

Jeffrey Barnett wrote:


Can someone tell me how to enable https for a particular php script?  I
was just looking at the newly created Library Success Wiki
http://www.libsuccess.org/ and noticed that its login page is
unencrypted.  I mentioned this to the Wiki admin who then asked me how
to fix it.  Unfortunately I know zilch about php.  Is there a simple
answer?

PS: The site is "powered by MediaWiki" http://www.mediawiki.org/ but a
search of their documentation for https returns no result.


[CODE4LIB] XML Database Server Requirements (Re: Catalog Enhancements)

2005-12-16 Thread Andrew Nagy

Roy, any word on your server specs?

Does anyone else have an index of XML records for their catalog?  If so,
how well equipped is your server?  I am concerned that an extremely
large XML index will need LOTS of RAM.

Thanks
Andrew

Roy Tennant wrote:


Short answer now, longer/better answer next week when someone gets
back in the office. We have 4.5 million records indexed at the
moment, but have had up to 9 million indexed. Our dev system runs on
a Unix server (specs to come) that runs other apps as well. I'm not
sure if we can share the crude search interface so you can judge the
response, but will try to find out.
Roy

On Dec 2, 2005, at 12:36 PM, Andrew Nagy wrote:


Roy Tennant wrote:


Andrew, just as an additional data point, we have millions of records
indexed in our Lucene-based XTF system, and the response isn't too
bad even on a development server.



Can you and others on this list briefly describe your hardware
platform
for this?  I am assuming this is not running on an old 486 that is
lying
around in your office :)

Do you feel that the searching is processor intensive and may be best
suited for a load balanced infrastructure?  I am implementing my pilot
using eXist which stores the XML Database in B Trees which from my
knowledge is an in memory data structure so therefor the machine would
need lots of ram however I am curious as to the processing
requirements.

Thanks, you guys rock!

Andrew



Re: [CODE4LIB] Catalog Enhancements & Extensions (Re: mylibrary @ockham)

2005-12-02 Thread Andrew Nagy

Roy Tennant wrote:


Andrew, just as an additional data point, we have millions of records
indexed in our Lucene-based XTF system, and the response isn't too
bad even on a development server.


Can you and others on this list briefly describe your hardware platform
for this?  I am assuming this is not running on an old 486 that is lying
around in your office :)

Do you feel that the searching is processor intensive and may be best
suited for a load balanced infrastructure?  I am implementing my pilot
using eXist which stores the XML Database in B Trees which from my
knowledge is an in memory data structure so therefor the machine would
need lots of ram however I am curious as to the processing requirements.

Thanks, you guys rock!

Andrew


Re: [CODE4LIB] code4libcon

2005-11-18 Thread Andrew Nagy

Neat!  Are any topics decided yet?  What kinds of talks are you looking
for, and are there any suggested topics?

Andrew

Edward Summers wrote:


This is very preliminary, but since folks on this list [1] have been
instrumental in getting things this far we figured it was best to at
least ping everyone to see if there are others interested in helping
plan, brain storm, cheer, boo, lurk, etc...

   http://www.code4lib.org/code4libcon

Follow that url and you'll find a barebones wiki for a nascent
workshop/camp type of gathering of library software enthusiasts and
their friends, partners in crime, etc. The scope and content of the
workshop are still in flux, but we've got a location and a date
(Oregon State Univ, Feb 15-17) so the hard part is done already.

Included in the wiki is a URL for the public code4libcon discussion
list. If you are interested in the details feel free to sign up at:

   https://lists.gatech.edu/sympa/info/code4libcon

We haven't even decided on the name yet, so if you've got ideas for
the name, content or activities please join up, and/or drop into
irc://irc.freenode.net/code4lib.

More polished urls and details to follow...

//Ed

[1] Daniel Chudnov, Edward Corrado, Andrew Forman, Jeremy Frumkin,
Brad LaJeunesse, Art Rhyno, Ross Singer, Ed Summers, Roy Tennant


Re: [CODE4LIB] PHP Question--Dynamically Naming then Calling a Variable

2005-11-14 Thread Andrew Nagy

Andrew, I think Ross's reply covers what you are asking for.  But it
seems that you may not be going about what you are trying to do in the
best possible way.  If you want to explain what you are trying to do in
more depth, we may be able to help you devise a more elegant solution.
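
For instance, reading straight from the $_POST superglobal sidesteps the
whole business of reconstructing $postvar_* variable names inside the
function.  A sketch using your field list:

<?php
// Sketch: build the hidden inputs from $_POST directly; no prefixing, no
// dynamic variable names, and the values get escaped on the way out.
function makeHiddenInputs($variable_list)
{
    foreach (explode(' ', $variable_list) as $name) {
        $value = isset($_POST[$name]) ? $_POST[$name] : '';
        printf("<input type=\"hidden\" name=\"%s\" value=\"%s\">\n",
               htmlspecialchars($name), htmlspecialchars($value));
    }
}

makeHiddenInputs('title authors periodical volume issue page year language keywords agency');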

Andrew Nagy

Andrew Darby wrote:


Hello, all.  I apologise for once again posting a mundane question
(rather than an interesting new idea), but this has vexed me for months
now (in different incarnations).  The problem:

I'm passing POST variables (using import_request_variables) with the
prefix $postvar_ , i.e., $email from page 1 becomes $postvar_email in
page 2.

Now, I want to dynamically assemble this sort of variable in a function,
like so:

function makeHiddenInputs ($variable_list) {
    $hidden_vars = explode(" ", $variable_list);

    foreach ($hidden_vars as $value) {
        print "<input type=\"hidden\" name=\"$value\" value=\"$postvar_$value\">\n";
    }
}

The call for the function would look like this:

makeHiddenInputs("title authors periodical volume issue page year
language keywords agency");

I'm trying to fill the value of the hidden input with the contents of
the POST variable, i.e., the one with the name $postvar_title (or
whatever), but it doesn't work that way. It just passes the $title
variable from within the function, not the contents of $postvar_title.
How should I be doing this?

Thanks in advance,

Andrew


Re: [CODE4LIB] Catalog Enhancements & Extensions (Re: mylibrary @ockham)

2005-11-09 Thread Andrew Nagy

Roy Tennant wrote:


Andrew, just as an additional data point, we have millions of records
indexed in our Lucene-based XTF system, and the response isn't too
bad even on a development server.


Roy, do you feel that XTF is able to give you the performance that you
are looking for?

I am currently evaluating my options and am looking at eXist and Berkeley
DB.  eXist looks nicer, just because it seems easier to install.
Can you or anyone else explain how you can tie an indexer such as
Lucene/Plucene into an XML database?  Is a separate indexer necessary if
the XML DB implementation already has one?  It seems that everyone who
replied to my original message is using some sort of indexer on top of
their XML DB; why?

We are also thinking about how to store the status information
(available or checked out).  How, where and when do we store this
information?  Do we grab it every 30 minutes and add it to the XML
database?  Do we store the info in a separate DB, etc.?

Thanks for all the help, you guys rock!

Andrew


  1   2   >