On Thu, May 11, 2006 at 01:30:17AM +1000, Jamie Wilkinson wrote:
> This one time, at band camp, Rob Sharp wrote:
> >I'm thinking I somehow need to split this file into manageable chunks to
> >import it, but the script I coded in PHP can't handle files of that size.
> >
> >Anyone have any pointers on how I can split the file based on the string
> >'CREATE TABLE' or something like that? A file per table would be fine to
> >import.
>
> To get the line numbers of the CREATE lines:
> zgrep -n '^CREATE TABLE' dump.sql.gz
>
> Split at the line numbers:
> zcat dump.sql.gz | sed -n 'LINE1,LINE2p' > somewhere
>
> There's probably some better magic you can do with csplit.
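For what it's worth, the csplit route might look something like the sketch
below (GNU csplit assumed; a toy two-table dump stands in for the real
dump.sql.gz):

```shell
# Toy stand-in for the real dump (the real thing would come from
# "zcat dump.sql.gz > dump.sql").
printf 'SET NAMES utf8;\nCREATE TABLE `a` (\n  id int\n);\nCREATE TABLE `b` (\n  id int\n);\n' > dump.sql

# Split before every CREATE TABLE line; -s is quiet, -z drops an empty
# leading piece, and '{*}' repeats the pattern until input runs out.
# Pieces come out as table-00 (preamble), table-01, table-02, ...
csplit -s -z -f table- dump.sql '/^CREATE TABLE/' '{*}'
ls table-*
```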
Excuse me while I reach for my awk hammer again.
zcat dump.sql.gz | awk 'BEGIN { outfile = "preamble"; x = 1 }
    # At each CREATE TABLE line, close the current file and start a new
    # one named "<n>-<table>", stripping the backquotes from the name.
    $0 ~ /^CREATE TABLE/ {
        close(outfile); x = x + 1
        outfile = x "-" substr($3, 2, length($3) - 2) }
    { print $0 >> outfile }'
Will give you everything before the first CREATE TABLE in a file named
preamble, and all of the other tables in files numbered sequentially with
the table name appended. I figure that keeping the count will help you
reassemble the tables in the same order as in the original file.
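Reassembly then just means concatenating in numeric order; note that a
plain lexical sort would put 10-posts before 2-users, so sort on the
numeric prefix. A rough sketch (with made-up chunk files standing in for
the awk script's real output):

```shell
# Fake chunks standing in for what the awk one-liner would produce.
printf 'SET NAMES utf8;\n' > preamble
printf 'CREATE TABLE `users` (...);\n' > 2-users
printf 'CREATE TABLE `posts` (...);\n' > 10-posts

# Preamble first, then the numbered chunks sorted numerically on the
# prefix before the "-" (so 2-users comes before 10-posts).
{ cat preamble; ls [0-9]*-* | sort -t- -k1,1n | xargs cat; } > reassembled.sql
```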
I've tested it quickly on a dump from one of my MySQL databases and
everything seems to be kosher. But check your own data before taking my
word for it. :-)
Cheers,
--
Pete
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html