On 7/3/2014 5:13 PM, Bosco Rama wrote:
If you use gzip you will be doing the same 'possibly unnecessary' compression step. Use a similar approach to the gzip command as you would for the pg_dump command. That is, use one of the -[0-9] options, like this:

$ pg_dump -Z0 -Fc ... | gzip -[0-9] ...
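
Just to make sure I follow, that pipeline would look something like this (the database name and gzip level below are just placeholders)?

$ pg_dump -Z0 -Fc mydb | gzip -6 > mydb.dump.gz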

Bosco, maybe you can recommend a different approach. I pretty much run daily backups that I keep only for disaster recovery. I generally don't do partial recoveries, so I doubt I'd ever modify the dump output. I just re-read the docs about formats, and it's not clear which one I'd be best off with; "plain" is the default, but the docs don't say whether it can be used with pg_restore.
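
If I'm reading the docs right, a plain dump is just a SQL script, so it would go back in through psql rather than pg_restore, something like this (file and database names are just examples):

$ pg_dump -Fp mydb > mydb.sql
$ psql -d mydb -f mydb.sql

whereas the custom format would be restored with:

$ pg_restore -d mydb mydb.dump

Is that right?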

Maybe --format=c isn't the fastest option for me, and I'm even less sure about the compression. I do want to be able to restore using pg_restore (unless plain is the best route, in which case, how do I restore that type of backup?), and I need to include large objects (--oids), but otherwise I'm mostly interested in the backup being as quick as possible.
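
For reference, what I run today is roughly this (names changed):

$ pg_dump --format=c --oids -f nightly.dump mydb

so any speed gain would have to come from changing the format and/or the compression settings.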

Many of the large objects are gzip compressed when stored. Would I be better off letting PG do its compression and dropping gzip, or turning off all PG compression and using gzip? Or perhaps using neither, since my large objects, which make up the bulk of the database, are already compressed?
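
In other words, I think I'm choosing between something like the following (again, names are placeholders):

$ pg_dump -Fc mydb > mydb.dump                 # let pg_dump compress (custom format compresses by default)
$ pg_dump -Z0 -Fc mydb | gzip > mydb.dump.gz   # no pg_dump compression, gzip instead
$ pg_dump -Z0 -Fc mydb > mydb.dump             # no compression at all

Does that cover the sensible options?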



