On Fri, 6 Jun 2003, Jamin Roth wrote:

> Date: Fri, 6 Jun 2003 17:27:30 -0700
> From: Jamin Roth <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Subject: Vague DBI Error
>
> I designed a simple script to back up my MySQL databases. Basically it
> loops through all the databases, then through each table. This section
> of code retrieves each table and writes it to a CSV file. When I execute
> the code at the end of this message on a table with 2.7 million rows, it
> works fine. When it's executed on 4.7 million rows, it exits with only
> "Terminated" as the error message. Is there any way I can get more
> information about what is going on? Could MySQL be timing out? Should I
> split up the table if it is over 2.5 million records (just do a LIMIT in
> the SQL statement)?
>
> Thank you for any help you can provide,
>
There should (I hope) be no need to split up the result set, but I can't think of what would be causing your problem. When do you get the error? When you call execute(), or in the middle of the fetch? If the latter, how many rows were fetched before it died? &c. &c. &c.

DBI->trace(2) will tell you what DBI is doing, but it will generate *much* data if you run it against 2+ million rows (there's a sketch below).

Also, take a look at 'SELECT ... INTO OUTFILE ...'; this might get you where you want to go a bit faster and easier. But if you stick with Perl, I'd suggest looking at Text::CSV_XS, which is much better than trying to hand-roll the CSV generation. Sketches of both follow.
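A minimal sketch of turning the trace on, assuming a DBD::mysql connection; the DSN, credentials, table name, and log path are all illustrative. Sending the trace to a file rather than STDERR, and counting rows as they come back, should at least show how far the script gets before it is terminated:

    use DBI;

    # Level-2 trace written to a file; on millions of rows this gets big.
    DBI->trace(2, '/tmp/dbi_trace.log');

    my $dbh = DBI->connect('dbi:mysql:database=mydb', 'user', 'pass',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare('SELECT * FROM big_table');
    $sth->execute;

    my $count = 0;
    while (my $row = $sth->fetchrow_arrayref) {
        $count++;
        # Progress marker: tells you how far the fetch got before dying.
        print STDERR "fetched $count rows\n" if $count % 100_000 == 0;
    }
    print "done: $count rows\n";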
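For the INTO OUTFILE route, a sketch assuming your MySQL user has the FILE privilege; the table name and path are illustrative. Note that MySQL writes the file on the *server* host, and the file must not already exist:

    # MySQL writes the CSV itself, so no rows cross the wire to Perl.
    my $table = 'big_table';
    $dbh->do(qq{
        SELECT * INTO OUTFILE '/tmp/$table.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\\n'
        FROM $table
    });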
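And a sketch of the Text::CSV_XS approach, reusing the hypothetical $dbh and table from above; the output filename is illustrative:

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ binary => 1, eol => "\n" })
        or die "Text::CSV_XS->new failed";

    open my $fh, '>', 'big_table.csv' or die "big_table.csv: $!";

    my $sth = $dbh->prepare('SELECT * FROM big_table');
    $sth->execute;

    # print() handles the quoting and escaping, so commas, quotes, and
    # newlines embedded in field values still produce valid CSV.
    while (my $row = $sth->fetchrow_arrayref) {
        $csv->print($fh, $row);
    }
    close $fh or die "close: $!";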
-r