> I've kinda-sorta been following this, so I don't know if you've answered
> these questions already.  When trying to simulate similar behavior on my
> Linux box, I can generate the 'Terminated' message by sending the
> process a SIGTERM.  I did this using a 'kill -15 pid' command, from
> another xterm.  I imagine this would work on any Unix.  I'm not familiar
> with the inner workings and hidden mechanisms of DBI, but I doubt it
> would generate a SIGTERM of its own volition.  As previously mentioned,
> this sounds like it may be happening external to the process doing the
> export.  That is, either another OS process is sending your process the
> SIGTERM, or the OS itself is sending it.  I'm wondering if you can have
> better luck with an strace, rather than a DBI trace, since this seems to
> be an OS-related issue.
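Dave's kill/strace suggestion is easy to try from a shell.  The sketch below (PIDs and file names are placeholders) shows that a process killed by SIGTERM exits with status 143 (128 + 15), which is the signature of the bare "Terminated" message:

```shell
# Reproduce a "Terminated" death: send SIGTERM (signal 15) to a process.
sleep 60 &
pid=$!
kill -15 "$pid"          # same as: kill -TERM $pid
wait "$pid"
echo "exit status: $?"   # prints "exit status: 143"; 128 + 15 marks SIGTERM

# To see signals arriving at the running backup script, attach strace to
# its PID, or run the whole script under strace from the start:
#   strace -f -e trace=signal -p <pid-of-Backup.pl>
#   strace -f -o /tmp/backup.trace perl Backup.pl
```

If strace shows the SIGTERM arriving with no matching kill() call from the script itself, the sender is external (another process or the kernel).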
>
> Just a thought,
> Dave
>
>  On Jun 9, Jamin Roth scribed:
>
>> The die isn't run.  I just get "Terminated" in the middle of the
>> script running ->execute.  This is what is frustrating.  I ran
>> DBI->Trace and this is what I got:
>>
>> Creating 192.168.50.215_pathtracker1_mnhtn_stats_detail.csv    ->
>> prepare for DBD::mysql::db (DBI::db=HASH(0x81b12e4)~0x81b129c 'SHOW
>> COLUMNS FROM mnhtn_stats_detail') thr#804b3c0
>> Setting mysql_use_result to 0
>>     <- prepare= DBI::st=HASH(0x81b1374) at Backup.pl line 85
>>     -> execute for DBD::mysql::st (DBI::st=HASH(0x81b1374)~0x8063644)
>> thr#804b3c0
>>     -> dbd_st_execute for 081b120c
>>     <- dbd_st_execute 10 rows
>>     <- execute= 10 at Backup.pl line 87
>>     -> fetchrow_arrayref for DBD::mysql::st
>> (DBI::st=HASH(0x81b1374)~0x8063644) thr#804b3c0
>>     -> dbd_st_fetch for 081b120c, chopblanks 0
>>     <- dbd_st_fetch, 6 cols
>>     <- fetchrow_arrayref= [ 'ID' 'bigint(20) unsigned' '' 'PRI' undef
>> 'auto_increment' ] row1 at Backup.pl line 89
>>     -> $DBI::err (*) FETCH from lasth=HASH
>>     <- $DBI::err= undef
>> Preparing SELECT * FROM mnhtn_stats_detail
>>     -> prepare for DBD::mysql::db (DBI::db=HASH(0x81b12e4)~0x81b129c
>> 'SELECT
>> * FROM mnhtn_stats_detail') thr#804b3c0
>> Setting mysql_use_result to 0
>>     <- prepare= DBI::st=HASH(0x80637c4) at Backup.pl line 104
>>     -> DESTROY for DBD::mysql::st (DBI::st=HASH(0x8063644)~INNER)
>> thr#804b3c0
>>     <- DESTROY= undef at Backup.pl line 106
>> Executing SELECT * FROM mnhtn_stats_detail
>>     -> execute for DBD::mysql::st (DBI::st=HASH(0x80637c4)~0x806380c)
>> thr#804b3c0
>>     -> dbd_st_execute for 08063878
>> Terminated
>>
>> Any ideas would be great,
>>
>> Jamin Roth
>> Systems/Network Administrator
>> Sorensen Associates Inc
>> Phone: (503) 665-0123 ext 234
>> Fax: (503) 666-5113
>> http://www.sorensen-associates.com/
>>
>>
>> -----Original Message-----
>> From: Michael A Chase [mailto:[EMAIL PROTECTED]
>> Sent: Friday, June 06, 2003 9:43 PM
>> To: Jamin Roth; [EMAIL PROTECTED]
>> Subject: Re: Vague DBI Error
>>
>>
>> On Fri, 6 Jun 2003 17:27:30 -0700 Jamin Roth <[EMAIL PROTECTED]>
>> wrote:
>>
>> > I designed a simple script to back up my MySQL databases.  Basically
>> it loops through all the databases then through each table.  This
>> section of code retrieves the table and writes it to a CSV file.
>> When I execute the code at the end of this message on a table with
>> 2.7 million rows it works fine.  When it's executed on 4.7 million
>> rows it exits out with only "Terminated" as the error message.  Is
>> there any way I can get more information about what is going on?
>> Could MySQL be timing out?  Should I split up the table if it is
>> over 2.5 million records (just do a limit in the SQL statement)?
>>
>> Splitting the table should not be necessary.  I don't see any reason
>> why 4.7 million rows should cause an early termination.  More details
>> in your die() arguments might help you figure out where the problem is
>> occurring.  You may simply have an invalid value returned by
>> $sth->rows() before all rows have been fetched.
>>
>> More comments for tightening up the script are included below.
>>
>> > #!/usr/bin/perl -w
>> >
>> > # #################################################################
>> # # This Script performs a backup of the specified host
>> > # #################################################################
>> >
>> > use DBI;
>> > use DBD::mysql;
>>
>> # If a script won't run with 'use strict;', it needs to be fixed.
>> use strict;
>>
>> > $|=1;
>>
>> # I can see some sense having $d, $defaulthost, $port, $user, and
>> #   $password as file level variables because they never change
>> #   during a run.  However they ought to be declared with 'my'.
>>
>> > #$db    = 'pathtracker';
>> > $defaulthost    = '192.168.50.215';
>> > #turkey ip = 192.168.50.197
>> > #pteranodon ip = 192.168.50.215
>> > $port = "3306";
>> > $user = "backup";
>> > $password = "12345";
>>
>> # These variables should be subroutine scoped variables.
>>
>> > $dsn = '';
>> > $dbh = '';
>> >
>> > # Use standard comma as the delimiter
>> > $d = ",";
>> >
>> > sub BackupHost { # Send host
>>
>>     # A list is easier to maintain than separate $_[x] assignments.
>>     my ( $host ) = @_;
>>
>> #>         my $host = $_[0];
>> >
>> >         $dsn = "DBI:mysql:host=$host;port=$port";
>> >         $dbh = DBI->connect($dsn,$user,$password)
>> >                 or die "Can't connect to the database:
>> $DBI::errstr\n";;
>>
>> # The second ';'s are redundant.
>>
>> > #       print "Finding Databases for $host\n";
>> >         my $SQLQuery = 'SHOW DATABASES';
>> >         my $sth = $dbh->prepare($SQLQuery)
>> >                 or die "Can't prepare SQL statement:
>> $DBI::errstr\n";;
>> >         $sth->execute()
>> >                 or die "Can't execute SQL statement:
>> $DBI::errstr\n";;
>>
>>   # Using bind_columns() allows a much simpler loop structure.
>>   # It's also much more efficient.
>>   my $database;
>>   $sth -> bind_columns( \$database );
>>   while ( $sth -> fetch ) {
>>
>> #>         while(my $databases = $sth->fetchrow_arrayref) {
>> #>                 $database = $$databases[0];
>> >                 print "Performing backup of $database database on
>> > $host\n";
>>
>> # There's no reason to generate $dsn here if you aren't
>> #   passing it to BackupDatabase().
>>
>> >                 $dsn =
>> > "DBI:mysql:database=$database;host=$host;port=$port";
>>
>> # You should use a separate connection handle.  The current $dbh is
>> #   needed for the $sth in this subroutine to work.
>> # It also makes more sense to make the database specific connection
>> #   inside BackupDatabase().
>>
>> #>                 $dbh = DBI->connect($dsn,$user,$password);
>> >                 # Backup each database in host
>> >                 BackupDatabase($host,$database);
>> >         }
>> >         warn "Data fetching terminated early by error:
>> $DBI::errstr\n" if
>> > $DBI::err;
>> >         $dbh->disconnect or warn "Error disconnecting:
>> $DBI::errstr\n";
>> > }
>> >
>> >
>> >
>> > sub BackupDatabase { # Send host and database
>>
>>      my ( $host, $database ) = @_;
>>
>> #>         my $host = $_[0];
>> #>         my $database = $_[1];
>>
>>   # Unless your die() arguments say more about what statement failed,
>>   #   you'd be better off to just set RaiseError to 1.
>>   my $dsn = "DBI:mysql:database=$database;host=$host;port=$port";
>>   my $dbh = DBI -> connect( $dsn, $user, $password,
>>       { AutoCommit => 0 } )
>>       or die "Backup Connect to $database failed, $DBI::errstr\n";
>>
>> >         my $SQLQuery = 'SHOW TABLES';
>> >         my $sth = $dbh->prepare($SQLQuery)
>> >                 or die "Can't prepare SQL statement:
>> $DBI::errstr\n";
>> >         $sth->execute()
>> >                 or die "Can't execute SQL statement:
>> $DBI::errstr\n";;
>>
>>    my $table;
>>    $sth -> bind_columns( \$table );
>>    while ( $sth -> fetch ) {
>>
>> #>         while(my $tables = $sth->fetchrow_arrayref) {
>> #>                 $table = $$tables[0];
>> >                 # Backup the table to CSV
>>
>>   # Parameters are easier to control than file level variables.
>>   BackupTable( $dbh, $host, $database, $table );
>>
>> #>                 BackupTableToCSV($host,$database,$table);
>> >         }
>> >         warn "Data fetching terminated early by error:
>> $DBI::errstr\n" if
>> > $DBI::err;
>>
>>      $dbh -> disconnect;
>>
>> > }
>> >
>> >
>> > sub BackupTableToCSV { # Send host, database and table
>>
>>     my ( $dbh, $host, $database, $table ) = @_;
>>
>> #>         my $host = $_[0];
>> #>         my $datbase = $_[1];
>> #>         my $table = $_[2];
>> #>
>> #>         $outfile = "$host\_$database\_$table.csv";
>> #>         open(OUTFILE, ">${outfile}") || die "Cannot open $outfile
>> #>           for output!\n";
>>
>>    # The original open doesn't say why the open failed.
>>    my $outfile = "${host}_${database}_$table.csv";
>>    open( OUTFILE, "> $outfile" ) or die "Can't open $outfile, $!\n";
>>
>> >         print "Creating $outfile";
>> >
>>
>>   # Note that @row has no connection to $row.
>>   # It is more efficient to get the column name list from
>>   #   $sth->{NAME}.
>>
>> #>         # Print column names
>> #>         my $SQLQuery = "SHOW COLUMNS FROM $table";
>> #>         my $sth = $dbh->prepare($SQLQuery)
>> #>                 or die "Can't prepare SQL statement: $DBI::errstr\n";
>> #>         $sth->execute()
>> #>                 or die "Can't execute SQL statement: $DBI::errstr\n";;
>> #>
>> #>         my $row = $sth->fetchrow_arrayref();
>> #>         my $line = '';
>> #>         my $found = 0;
>> #>         foreach $field(@row) {
>> #>                 if($found > 0) {$line = $line.$d; }
>> #>                 $line = $line.$field;
>> #>                 $found = 1;
>> #>         }
>> #>         warn "Data fetching terminated early by error:
>> #>           $DBI::errstr\n" if $DBI::err;
>> #>         print OUTFILE $line."\n";
>> >
>> >
>> >         # Print table contents
>> >         $SQLQuery = "SELECT * FROM $table";
>> >         print "Preparing $SQLQuery\n";
>> >         $sth = $dbh->prepare($SQLQuery)
>> >                 or die "Can't prepare SQL statement:
>> $DBI::errstr\n";
>> >         print "Executing $SQLQuery\n";
>> >         $sth->execute()
>> >                 or die "Can't execute SQL statement:
>> $DBI::errstr\n";;
>>
>>    # Print column names
>>    # @col will be used for column values later.
>>    my @col = @{$sth -> {NAME}};
>>    print OUTFILE join( $d, @col ), "\n";
>>    $sth -> bind_columns( \( @col[0 .. $#col] ) );
>>
>>    # Print column values
>>    my $rows = 0;
>>    while ( $sth -> fetch ) {
>>       # At this point @col has the column values
>>       # "" might be better than NULL to indicate NULL columns.
>>       # Some column values may require quoting (e.g., ',').
>>       foreach ( @col ) { $_ = "NULL" if ! defined $_; }
>>       print OUTFILE join( $d, @col ), "\n";
>>       ++$rows;
>>    }
>>    print "   Wrote $rows rows to $outfile\n";
>>
>>    # $sth->rows is not guaranteed to be correct until after all rows
>>    #   have been fetched.
>>
>> #>         print " which has ".$sth->rows." lines\n";
>> #>         while($row = $sth->fetchrow_arrayref) {
>> #>                 my $line = '';
>> #>                 my $found = 0;
>> #>                 foreach my $field(@$row) {
>> #>                         if($found > 0) { $line = $line.$d; }
>> #>                         if($field) {
>> #>                                 $line = $line.$field;
>> #>                         }else{
>> #>                                 $line = $line."NULL";
>> #>                         }
>> #>                         $found = 1;
>> #>                 }
>> #>                 print OUTFILE $line."\n";
>> #>         }
>> >         warn "Data fetching terminated early by error:
>> $DBI::errstr\n" if
>> > $DBI::err;
>> >         close(OUTFILE);
>>
>>   # finish() is not appropriate if all rows are always fetched,
>>   #  barring errors.
>> #>         $sth->finish;
>>
>> > }
>> >
>> >
>> >
>> > # Loop through command line arguments
>>
>>    @ARGV = ( $defaulthost ) if ! @ARGV;
>>    my $host;
>>    foreach $host ( @ARGV ) {
>>       BackupHost( $host );
>>    }
>>
>> #> if(scalar(@ARGV) > 0) {
>> #>         for ($i = 0;$i < scalar(@ARGV);$i++) {
>> #>                 $host = $ARGV[$i];
>> #>                 # backup each host
>> #>                 BackupHost($host);
>> #>         }
>> #> }else{
>> #>         # backup default host
>> #>         BackupHost($defaulthost);
>> #> }
>>
>> --
>> Mac :})
>> Give a hobbit a fish and he eats fish for a day.
>> Give a hobbit a ring and he eats fish for an age.
In Linux you might also need to check the 'ulimit' settings, which are
per-user limits that can be set below the OS limits.  Most of these
relate to files rather than elapsed time (though 'ulimit -t' does cap
CPU seconds).  If it's a shared box, there might also be a max-duration
limit set somehow.
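For anyone checking this, a quick look at the current limits (values shown are whatever the box happens to have set, not suggestions):

```shell
# Dump all per-user resource limits for the current shell.
ulimit -a

# CPU-time limit in seconds ("unlimited" if unset).  Note that exceeding
# it sends SIGXCPU rather than SIGTERM, so it's an imperfect match for
# the symptom here.
ulimit -t

# File-size limit in 512-byte blocks; a multi-million-row CSV export
# could plausibly trip this one.
ulimit -f
```

If the export dies on file size you would normally also see a "File size limit exceeded" message, so compare what the script prints against these.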

William R. Mussatto, Senior Systems Engineer
Ph. 909-920-9154 ext. 27
FAX. 909-608-7061

