Rob Sharp wrote:
Sluggers,

I've got a large (2.6G) MySQL dump containing extended SQL inserts which
I need to import onto a server.

Normally, I'd import using something like:

        sudo nice mysql itm_integ < sqldump.sql

but in this case it grinds the server into the ground, presumably
because it loads the file into memory/swap as it imports.

I'm thinking I somehow need to split this file into manageable chunks to
import it, but the script I coded in PHP can't handle files of that size.

Anyone have any pointers on how I can split the file based on the string
'CREATE TABLE' or something like that? A file per table would be fine to
import.

There could be various reasons for this. E.g. if there are indexes in the table structure, you could speed up the insert by dropping the indexes first, importing the data, and recreating the indexes after the insert. This is because autocommit is usually turned on, so the indexes are updated after each commit, and recalculating the indexes for a whole batch of rows is much faster than doing it for each row.
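A related trick that avoids dropping the indexes outright is to wrap the dump in SET statements so index and constraint work is deferred until one commit at the end. This is a sketch only: the toy sqldump.sql below just makes it runnable end to end, and in practice you would feed in the real 2.6G file.

```shell
# Toy stand-in for the real dump, only so the sketch runs.
printf 'INSERT INTO t VALUES (1),(2);\n' > sqldump.sql

# Wrap the dump: one transaction, uniqueness/FK checks deferred.
{
  echo 'SET autocommit=0;'
  echo 'SET unique_checks=0;'
  echo 'SET foreign_key_checks=0;'
  cat sqldump.sql
  echo 'COMMIT;'
  echo 'SET unique_checks=1;'
  echo 'SET foreign_key_checks=1;'
} > sqldump_fast.sql

# Then import as before:
#   sudo nice mysql itm_integ < sqldump_fast.sql
head -n 1 sqldump_fast.sql
```

This keeps the dump file itself untouched and is easy to back out if the import fails partway.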

If you are creating the dump yourself, you could use the "-T" option of the mysqldump tool. This will create separate files for the data and the CREATE statements. Check also "LOAD DATA" for how to import the generated files.
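That workflow might look like the sketch below. Assumptions (not from the original post): the dump is made on the database host itself, mysqld can write into DUMP_DIR (this interacts with the server's secure_file_priv setting), the account has the FILE privilege, and the directory name is invented. The guard lets the sketch run even where mysqldump is not installed.

```shell
# Hypothetical -T dump/reload workflow; directory name is made up.
DUMP_DIR=./itm_integ_dump
mkdir -p "$DUMP_DIR"

if command -v mysqldump >/dev/null 2>&1; then
    # Writes one table.sql (CREATE TABLE) and one table.txt (data) per table.
    mysqldump -T "$DUMP_DIR" itm_integ
    # Recreate the schema, then bulk-load the data (mysqlimport uses
    # LOAD DATA INFILE under the hood; the table name comes from the filename).
    cat "$DUMP_DIR"/*.sql | mysql itm_integ
    mysqlimport itm_integ "$DUMP_DIR"/*.txt
else
    echo "mysqldump not installed; the commands above show the intended workflow"
fi
```

Loading tab-separated data this way is generally much faster than replaying INSERT statements, and it gives you the per-table files for free.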

As for splitting an SQL dump into smaller chunks: sorry, I have no idea.
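For what it's worth, the split the original poster asked about can be done with a streaming tool like awk, which reads one line at a time, so the 2.6G size is not a problem (unlike a script that slurps the whole file). Extended inserts are each a single long line, and awk only splits at line boundaries, so no statement gets cut in half. The toy dump and the part_NNN.sql naming scheme below are invented for illustration; point the awk command at the real sqldump.sql.

```shell
# Toy dump standing in for the real 2.6G file, only so the sketch runs.
cat > sqldump.sql <<'EOF'
SET NAMES utf8;
CREATE TABLE a (id INT);
INSERT INTO a VALUES (1),(2);
CREATE TABLE b (id INT);
INSERT INTO b VALUES (3);
EOF

# Start a new output file at every CREATE TABLE line; anything before the
# first one (SET statements etc.) lands in the preamble file.
awk 'BEGIN { out = "part_000_preamble.sql" }
     /^CREATE TABLE/ { close(out); out = sprintf("part_%03d.sql", ++n) }
     { print > out }' sqldump.sql

ls part_*.sql
```

The chunks can then be imported one at a time, e.g. `for f in part_*.sql; do sudo nice mysql itm_integ < "$f"; done`. GNU csplit can do much the same split with `csplit sqldump.sql '/^CREATE TABLE/' '{*}'`.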
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
