On Fri, Aug 8, 2008 at 10:21 AM, John Craig <[EMAIL PROTECTED]> wrote:
>
> 2008/8/6 Brandon W. Uhlman <[EMAIL PROTECTED]>:
>
> > I have about 960 000 bibliographic records I need to import into an
> > Evergreen system. The database server is dual quad-core Xeons with
> > 24GB of RAM.
> >
> > Currently, I've split the bibliographic records into 8 batches of
> > ~120K records each, did the marc2bre/direct_ingest/parallel_pg_loader
> > dance, but one of those files has been chugging along in psql now for
> > more than 16 hours. How long should I expect these files to take?
> > Would more smaller files load more quickly in terms of total time for
> > the same full recordset?
>
> Just for what it's worth: my experience is that, given a large number
> of bibs to import, more smaller batches complete faster than a few
> larger batches.
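[Editor's note: a minimal sketch of the batching idea discussed above. This is not the actual Evergreen tooling (marc2bre/direct_ingest/parallel_pg_loader); the record list, batch size, and `chunk` helper are illustrative stand-ins.]

```python
# Hypothetical sketch: divide a large record set into smaller batches
# before loading, as described in the thread. The numbers mirror the
# scenario above: ~960K records split into batches of ~120K.

def chunk(records, size):
    """Yield successive batches of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

records = list(range(960_000))           # stand-in for the bib records
batches = list(chunk(records, 120_000))  # ~120K records per batch
print(len(batches))                      # → 8
```

With a smaller `size`, each batch (and hence each load transaction) is shorter, at the cost of more invocations of the load pipeline.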
John, thanks for that data point! I haven't attempted to measure the
difference, but there is certainly a difference at COMMIT time, with
larger transactions taking longer.

--
Mike Rylander
 | VP, Research and Design
 | Equinox Software, Inc. / The Evergreen Experts
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  [EMAIL PROTECTED]
 | web:    http://www.esilibrary.com
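[Editor's note: a self-contained sketch of the COMMIT-size point above, committing in small batches rather than one giant transaction. Evergreen loads into PostgreSQL; `sqlite3` is used here only so the example runs without a database server, and the table, column names, and batch size are all illustrative assumptions.]

```python
import sqlite3

# Hypothetical sketch: load rows in several small transactions instead
# of one large one, so each COMMIT covers only a small amount of work.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bib (id INTEGER PRIMARY KEY, marc TEXT)")

records = [(i, f"record {i}") for i in range(1, 1001)]
BATCH = 100  # records per transaction (illustrative)

for start in range(0, len(records), BATCH):
    conn.executemany("INSERT INTO bib VALUES (?, ?)",
                     records[start:start + BATCH])
    conn.commit()  # each COMMIT covers only BATCH rows

count = conn.execute("SELECT count(*) FROM bib").fetchone()[0]
print(count)  # → 1000
```

The trade-off is the one noted in the thread: many short transactions each pay a small COMMIT cost, while one huge transaction defers all of that work to a single, much longer COMMIT.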
