Hi Everyone,

I have been tasked with importing a large set of bibliographic MARC records (under 1 
million).

I have leveraged work from Jason Stephenson, 
http://git.mvlcstaff.org/?p=jason/backstage.git;a=summary.

The import script has been modified so that, instead of performing the updates 
directly, it writes SQL files containing the update commands.  Each file holds 
about 10,000 records.  This bypasses checking against the database and simply 
generates the update scripts from the MARC records.
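Not from the modified script itself, but a minimal sketch of that batching step, assuming each record has already been reduced to a bib ID and its new MARC payload (the UPDATE shape and the `marc` column target are placeholders for whatever the real script emits):

```python
import os

BATCH_SIZE = 10_000  # records per SQL file, as in the process described above


def sql_escape(text):
    """Escape single quotes for use inside a SQL string literal."""
    return text.replace("'", "''")


def write_update_batches(records, out_dir, batch_size=BATCH_SIZE):
    """Write UPDATE statements for (bib_id, marc) pairs into numbered SQL
    files of at most batch_size records each.  Returns the file paths."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for i in range(0, len(records), batch_size):
        path = os.path.join(out_dir, f"bib_update_{i // batch_size:04d}.sql")
        with open(path, "w") as f:
            # One transaction per file, so a failed batch can be retried whole.
            f.write("BEGIN;\n")
            for bib_id, marc in records[i:i + batch_size]:
                f.write(
                    "UPDATE biblio.record_entry "
                    f"SET marc = '{sql_escape(marc)}' WHERE id = {int(bib_id)};\n"
                )
            f.write("COMMIT;\n")
        paths.append(path)
    return paths
```

Wrapping each file in a single transaction keeps a partially applied batch from leaving records half-updated.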

These files are then batch processed.
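One way the batch step could be driven, assuming the files from the previous stage are applied in order with psql (shown here as a dry run that only prints each command; the database name is a placeholder):

```python
import glob
import os
import shlex
import subprocess


def apply_batches(sql_dir, dbname="evergreen", dry_run=True):
    """Run each batch file in sorted order through psql.  With dry_run=True
    the command is printed instead of executed.  Returns the command lists."""
    cmds = []
    for path in sorted(glob.glob(os.path.join(sql_dir, "*.sql"))):
        # ON_ERROR_STOP makes psql exit non-zero on the first failed statement.
        cmd = ["psql", "-d", dbname, "-v", "ON_ERROR_STOP=1", "-f", path]
        cmds.append(cmd)
        if dry_run:
            print(shlex.join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return cmds
```

Sorting the file names keeps the batches applied in the order they were generated.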

This process ignores overlay profiles, which we deemed unnecessary for this 
project.

Before the update, the triggers on biblio.record_entry are disabled, in 
particular the reingest triggers.  We run a full reingest after all of the 
records have been updated.
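For reference, disabling and re-enabling the user-level triggers can be done with plain PostgreSQL DDL (generic syntax; the actual process may instead target specific reingest triggers by name):

```sql
-- Disable all non-system triggers on bib records before the bulk update.
ALTER TABLE biblio.record_entry DISABLE TRIGGER USER;

-- ... apply the batched update files here ...

-- Re-enable the triggers once the updates are done.
ALTER TABLE biblio.record_entry ENABLE TRIGGER USER;
```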

My reason for posting is to get feedback from others who have been charged with 
updating a large set of bib records (over 500,000): how you succeeded, and what 
pitfalls you ran into.

Kyle Tomita
Developer II, Catalyst IT Services
Beaverton Office


