Re: [ZODB-Dev] RelStorage, PostgreSQL backup method comparison

2012-12-27 Thread Shane Hathaway

On 12/26/2012 10:43 AM, Sean Upton wrote:

For cron job RelStorage backups (database only; blobs are backed up
separately; PostgreSQL 9.0.x backend), I use both zodbconvert
to save FileStorage copies of my database, and pg_dump for low-level
binary dumps (custom format for pg_restore, preserving Postgres OIDs).
bzip2-compressed, the pg_dump backups are consistently ~2.5 times the
size of the compressed FileStorage -- this puzzles me.

I'm using something in my bash backup script that looks like:

   $PGDUMP -Fc -o -h $SOCKET_HOST -p $SOCKET_PORT $dbname | bzip2 -c - \
       > $DESTDIR/pgdump-$dbname-$DATESTAMP.bz2


One database that backs up to a 45MB bz2-compressed FileStorage file
yields a 123MB bz2-compressed pg_dump custom-format file.  I would
expect such a ratio in on-disk running size, but not in compressed
backups of the same data.

Generally, I'm wondering: for the same data, what makes my high-level
FileStorage dump so much smaller than the lower-level pg_dump
alternative?  Anyone with hunches or PostgreSQL kung-fu to add insight?


My guess is that the Postgres blob backup format is inefficient.  Also,
you are probably backing up the tables used for packing.  You might want
to use the -T option to exclude the pack tables, but then you'll have to
create empty pack tables when restoring.
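As a sketch, an exclusion-based dump might look like the following (the
table names pack_object, object_ref, and object_refs_added are the usual
RelStorage pack tables, but verify them against your actual schema
before relying on this):

```
# Dump in custom format, excluding the RelStorage pack tables;
# they are rebuilt during a pack and only inflate the backup.
$PGDUMP -Fc -o \
    -T pack_object -T object_ref -T object_refs_added \
    -h $SOCKET_HOST -p $SOCKET_PORT $dbname | bzip2 -c - \
    > $DESTDIR/pgdump-$dbname-$DATESTAMP.bz2
```

After restoring such a dump, the excluded tables have to be recreated
empty before packing will work again.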



Side-note: the zodbconvert script seems a perfectly viable mechanism
for ZODB backup (regardless of whether one uses a RelStorage backend),
but I am not sure if anyone else does this.


I agree that zodbconvert is a good way to back up the database, although 
it might not be as fast as pg_dump.
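For anyone trying this: zodbconvert is driven by a small ZConfig file
naming a source and a destination storage.  A minimal sketch for a
PostgreSQL-to-FileStorage backup (the DSN and path are placeholders):

```
<relstorage source>
  <postgresql>
    dsn dbname='zodb' host='localhost'
  </postgresql>
</relstorage>
<filestorage destination>
  path /backups/Data.fs
</filestorage>
```

Running something like `zodbconvert backup.conf` then copies every
transaction from the source storage into the destination FileStorage.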


Shane

___
For more information about ZODB, see http://zodb.org/

ZODB-Dev mailing list  -  ZODB-Dev@zope.org
https://mail.zope.org/mailman/listinfo/zodb-dev


[ZODB-Dev] RelStorage, PostgreSQL backup method comparison

2012-12-26 Thread Sean Upton
For cron job RelStorage backups (database only; blobs are backed up
separately; PostgreSQL 9.0.x backend), I use both zodbconvert
to save FileStorage copies of my database, and pg_dump for low-level
binary dumps (custom format for pg_restore, preserving Postgres OIDs).
bzip2-compressed, the pg_dump backups are consistently ~2.5 times the
size of the compressed FileStorage -- this puzzles me.

I'm using something in my bash backup script that looks like:

  $PGDUMP -Fc -o -h $SOCKET_HOST -p $SOCKET_PORT $dbname | bzip2 -c - \
      > $DESTDIR/pgdump-$dbname-$DATESTAMP.bz2

One database that backs up to a 45MB bz2-compressed FileStorage file
yields a 123MB bz2-compressed pg_dump custom-format file.  I would
expect such a ratio in on-disk running size, but not in compressed
backups of the same data.

Generally, I'm wondering: for the same data, what makes my high-level
FileStorage dump so much smaller than the lower-level pg_dump
alternative?  Anyone with hunches or PostgreSQL kung-fu to add insight?

Side-note: the zodbconvert script seems a perfectly viable mechanism
for ZODB backup (regardless of whether one uses a RelStorage backend),
but I am not sure if anyone else does this.

Sean