You're welcome, Mike... geeze, you are a fast adopter... While I have your attention, though, might I ask about alternate approaches to pg-loader.pl? Because it keeps EVERYTHING in memory until it's done reading the complete input file, I get into severe swapping after 30,000 records or so (on a system with 1 GB of memory). I've seen others mention similar problems. Yes, I could break my input into chunks, but...
In the past, I've used a technique where I wrote separate files for each table of data, piped each through a 'sort unique', and then did a bulk copy to load the individual tables. Is that something that you (or others) would find useful? I'd still like to get to a place where I could load a fresh Evergreen database every week or so from a dump of my 5 million bibliographic records. I do do that now (in 3 or 4 hours, elapsed) with my Simple OPAC Backup at http://library.wlu.ca/searchme, and don't see why I can't accomplish the same feat with Evergreen.

don

>>> [EMAIL PROTECTED] 02-Aug-2007 1:37 PM >>> [snip] Hope that helps in the future, and thanks for the idea! --miker
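For what it's worth, the per-table-file / sort-unique / bulk-copy technique described above could be sketched roughly like this. This is only an illustration, not actual Evergreen tooling: the input format, the `stage_*.tsv` file names, and the `biblio` table name are all made up for the example, and the `psql \copy` step is commented out since it needs a live database.

```shell
#!/bin/sh
# Sketch: stream records into one staging file per table, dedupe each
# file with sort -u (so nothing is held in memory), then bulk-load.

rm -f stage_*.tsv   # start from clean staging files

# 1. Split incoming tab-delimited records (table-name in column 1)
#    into one file per table. Sample input is inlined here.
printf 'biblio\t1\tTitle A\nbiblio\t1\tTitle A\nbiblio\t2\tTitle B\n' |
while IFS="$(printf '\t')" read -r table rest; do
    printf '%s\n' "$rest" >> "stage_${table}.tsv"
done

# 2. Deduplicate each staging file on disk instead of in RAM.
for f in stage_*.tsv; do
    sort -u "$f" -o "$f"
done

# 3. Bulk-load each file with COPY (uncomment against a real database):
# for f in stage_*.tsv; do
#     table="${f#stage_}"; table="${table%.tsv}"
#     psql evergreen -c "\\copy ${table} FROM '${f}'"
# done

wc -l stage_biblio.tsv   # two unique rows remain after the dedupe
```

The point of the intermediate files is that `sort -u` does an external merge sort, so memory use stays flat no matter how many records go through, and PostgreSQL's `COPY` is far faster than row-at-a-time INSERTs for the final load.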
