Hi,
I have never dealt with tables made of compressed
data, but I back up the database from a crontab
job like this:
<some environment variable setup>
.
.
filename=`date +%Y%m%d.%w`.gz
/usr/local/pgsql/bin/pg_dumpall | gzip > /some_destination/$filename
.
.
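In case it's useful, here is a fuller sketch of how this can be
wired up. The script name, destination path, and schedule below
are placeholders, not my actual setup:

#!/bin/sh
# pg_backup.sh -- nightly compressed dump of all databases
PATH=/usr/local/pgsql/bin:/bin:/usr/bin
filename=`date +%Y%m%d.%w`.gz    # e.g. 20080412.6.gz (date plus day of week)
pg_dumpall | gzip > /some_destination/$filename

and a crontab entry to run it, say every night at 02:30:

30 2 * * * /usr/local/bin/pg_backup.sh

(Keeping the date logic in the script rather than on the crontab
line avoids having to escape the % characters, which crontab
treats specially.)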
Hope this helps.
Tena Sakai
[EMAIL PROTECTED]
-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Ryan Wells
Sent: Sat 4/12/2008 5:59 PM
To: Ryan Wells; [email protected]
Subject: [ADMIN] Slow pg_dump
We're having what seem like serious performance issues with pg_dump, and I hope
someone can help.
We have several tables used to store binary data as bytea (in this
example, image files), but we're having similar timing issues with
text tables as well.
In my most recent test, the sample table was about 5 GB in 1644 rows,
with image file sizes between 1 MB and 35 MB. The server was a 3.0 GHz
P4 running WinXP with 2 GB of RAM, the backup was stored to a separate
disk from the data, and little else was running on the system.
We're doing the following:
pg_dump -i -h localhost -p 5432 -U postgres -F c -v -f "backupTest.backup" -t "public"."images" db_name
In the test above, this took 1 hr 45 min to complete, a rate of roughly
0.8 MB/s. Since we expect to have users with 50-100 GB of data, if not
more, that extrapolates to somewhere between 17 and 35 hours per backup,
and backup times approaching an entire day are unacceptable.
We think there must be something we're doing wrong. A search turned up a
similar thread
(http://archives.postgresql.org/pgsql-performance/2007-12/msg00404.php),
but our numbers are so much higher than those that we must be doing
something very wrong. Hopefully there's a server setting or a pg_dump
option we need to change, but we're open to design changes if necessary.
Can anyone who has dealt with this before advise us?
Thanks!
Ryan