Eric Crist wrote:

First off, I don't care if you send examples in perl, php, or sh, but we're not a python shop here, so those recommendations won't be useful...

I'm trying to write a shell script that scans our databases for tables starting with archive_, which are created by other scripts/departments, etc. This script needs to perform a mysqldump of each such table and then gzip it. It's MUCH quicker to pipe directly to gzip than to perform the dump and then gzip the result. The problem is that this table-to-filesystem dump is also going to drop those archive_* tables, so we would like to know that the mysqldump worked before we do that. The trouble we're having, as I'm sure others have run into (at least according to Google), is that a command such as the following leaves no apparent easy way to capture the exit status of the mysqldump command:

# mysqldump -u $USER -p$PASS -h $DBHOST $DATABASE $TABLE | gzip > $TABLE.sql.gz
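One route that keeps the direct pipe is bash's PIPESTATUS array, which records the exit status of every stage of the last pipeline. This is a bash-specific feature, not plain /bin/sh. A minimal sketch, with `false` standing in for a failing mysqldump:

```shell
#!/bin/bash
# Sketch: ${PIPESTATUS[0]} holds the exit status of the FIRST command in
# the last pipeline -- here "false" stands in for a failing mysqldump.
false | gzip > /tmp/demo.sql.gz
dump_status=${PIPESTATUS[0]}   # save it immediately; the next command resets it
if [ "$dump_status" -ne 0 ]; then
    echo "dump failed with status $dump_status; not dropping tables" >&2
else
    echo "dump ok; safe to drop the table"
fi
```

Note that PIPESTATUS is clobbered by the very next command, so copy it into a variable right after the pipeline. If your script starts with #!/bin/sh this won't work; it needs bash.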

This rough perl should do the trick:

   open(MYSQL, "/tmp/fail |") || die "cannot start mysqldump: $!";
   open(GZIP, "| gzip > /tmp/test.gz") || die "cannot start gzip: $!";
   while ($ret = read(MYSQL, $buf, 4096)) {
      print GZIP $buf or die "gzip write failed: $!";
   }
   die "mysqldump read failed: $!"
      if (!defined($ret));

   close(MYSQL) || die "mysqldump failed (exit status $?)";
   close(GZIP)  || die "gzip close failed";

and for testing /tmp/fail was executable and contained:

   #!/bin/sh -
   cat /etc/motd /etc/motd /etc/motd /etc/motd /etc/motd /etc/motd /etc/motd /etc/motd
   exit 1

With exit 1, perl dies on the MYSQL close; with exit 0, perl exits normally. (The pipe open only fails if the fork itself fails; a nonzero exit status from the command shows up as a false return from close, with the status in $?.)

If this works for you, then /tmp/fail gets replaced with your mysqldump command, and you'll need some params to pass in the name of the gzipped file.

You can play with sysread instead of read, and vary the buffer size. No idea how it compares for speed to straight shell piping to gzip.
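If you'd rather stay with straight shell piping, another bash-only sketch is set -o pipefail, which keeps the direct pipe to gzip but makes the whole pipeline's exit status nonzero whenever any stage fails (again `false` stands in for mysqldump):

```shell
#!/bin/bash
# Sketch: with pipefail, a pipeline's exit status is the rightmost nonzero
# status among its stages, so a failed dump is no longer masked by gzip.
set -o pipefail
false | gzip > /tmp/demo2.sql.gz
rc=$?
if [ "$rc" -ne 0 ]; then
    echo "dump failed (status $rc); keeping the archive_ tables" >&2
fi
```

Like PIPESTATUS, pipefail is a bashism; it won't help under a strict POSIX /bin/sh.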


