Ok, something weird is going on with this import. I started it on Friday and it was still running on Monday. When I came in this morning it had finished, but with an error:

    Can't locate object method "class_name" via package "HU" (perhaps you forgot to load "HU"?) at pg_loader.pl line 48, <> line 1648134328.

Once I saw that, I decided to try a smaller number of imports at once; the failed run had 3500 records, so I made a file with 1000 records to try. It started off really slowly, so I stopped it and tried to restart all of the Evergreen services, only to be told by the services that there wasn't enough free space to start up. So I started checking around and found that the osrfsys.log file was almost 120GB in size, full of entries from the previous import. I erased the log file, but it automatically created another osrfsys.log and started putting entries in it, again from the last import. I finally had to kill the Perl process so that I could completely erase the log.

Now that I have it erased, I can't get the router user to connect to the Jabber server. Can someone give me some insight into why this has happened and what I might be doing wrong to cause it?
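[A likely explanation for why the space didn't come back until the Perl process was killed: on Unix, deleting a file that a running process still holds open only removes the file's name; the disk space is not released until the last open descriptor is closed. Truncating the log in place avoids the problem. A minimal sketch, with made-up paths rather than the real Evergreen layout:]

```python
# Sketch: deleting an open log vs. truncating it in place.
# Paths here are illustrative only.
import os

log_path = "/tmp/osrfsys_demo.log"

writer = open(log_path, "w")           # stands in for the long-running importer
writer.write("old import entries\n")
writer.flush()

os.unlink(log_path)                    # "erasing" the file removes its name...
writer.write("still being written\n")  # ...but the process keeps writing to the
writer.flush()                         # deleted inode, so no space is freed
print(os.path.exists(log_path))        # False: name gone, space still in use

writer.close()                         # only now does the kernel free the space

# Safer: truncate in place; open descriptors stay valid and space is
# released immediately, without having to kill the writing process.
log2 = "/tmp/osrfsys_demo2.log"
with open(log2, "w") as w:
    w.write("x" * 100)
os.truncate(log2, 0)
print(os.path.getsize(log2))           # 0
```

[The shell equivalent of the last step is `: > osrfsys.log` or `truncate -s 0 osrfsys.log` instead of `rm`.]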
On Mon, Jun 2, 2008 at 12:22 PM, Dan Scott <[EMAIL PROTECTED]> wrote:

> 2008/6/2 Robert <[EMAIL PROTECTED]>:
> > Hey guys, any news on why the copies or volumes might not have copied
> > over? Also, can someone tell me, from your experience importing records,
> > what is the maximum you have imported at once? I tried to import a file
> > that had 3500 records in it over the weekend and it is still running and
> > looks to be hung up. Just out of curiosity.
>
> 1) The steps listed for the Gutenberg records get bibliographic
> records into the system, but no call numbers or copies. That's what
> the import_demo package tries to demonstrate:
> http://svn.open-ils.org/trac/ILS-Contrib/wiki/ImportDemo
> The approach in the import_demo takes you through the steps for getting
> bib records into the system, then goes beyond that to parse holdings
> statements directly from the MARC21XML for the bib records and generates
> call numbers and copies to load into the system. This isn't necessarily
> the best approach for getting call numbers and copies into your system;
> you're going to have to tailor your approach to the system you're
> working with.
>
> 2) The most bib records I have imported in a single file is somewhere
> around 2 million. This weekend I was importing approximately 360,000
> bib records from a single file. Note that you really want to be using
> the parallel_pg_loader.pl approach (as demonstrated in import_demo) if
> you're working on a system with memory constraints.
>
> --
> Dan Scott
> Laurentian University
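[For anyone wanting to break a large load file into smaller batches before running the loader: binary MARC21 records each end with the record terminator byte 0x1D, so a file can be split by counting terminators. A rough sketch, not the project's tooling; the function name, file names, and batch size are all made up:]

```python
# Sketch: split a binary MARC file into batches of N records,
# using the MARC21 record terminator (0x1D) to find record boundaries.
RECORD_TERMINATOR = b"\x1d"

def split_marc(path, batch_size=1000, prefix="batch"):
    """Write batches of `batch_size` records to prefix_NNN.mrc files."""
    with open(path, "rb") as f:
        data = f.read()
    # Each record ends with 0x1D; keep the terminator with its record.
    records = [r + RECORD_TERMINATOR
               for r in data.split(RECORD_TERMINATOR) if r]
    out_files = []
    for i in range(0, len(records), batch_size):
        name = f"{prefix}_{i // batch_size:03d}.mrc"
        with open(name, "wb") as out:
            out.write(b"".join(records[i:i + batch_size]))
        out_files.append(name)
    return out_files
```

[Each resulting file could then be fed through the loader separately, which also keeps any failure (like the "class_name" error above) scoped to one small batch.]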