On Tuesday, March 25, 2014 02:56:48 PM Frank Foerster wrote:
> Hi,
> 
> we are currently in the process of upgrading a production/live 1 TB
> database from 9.2 to 9.3 via pg_dump, which is quite a lengthy process.
> 
> Fortunately we have a capable spare server, so we can restore onto a clean,
> freshly set up machine.
> 
> I just wondered whether the intermediate step of writing the dump-file and
> re-reading it to have it written to the database is really necessary. Is
> there any way to "pipe" the dump-file directly into the new
> database process, or would such functionality make sense?
> 
> I can only speak for us, but each time we do a dump/restore we need to
> extract/copy/move very large files, and piping directly into something like
> psql/pg_restore on another machine would greatly reduce upgrade time and
> pain.
> 
> Thanks and best regards,
> 
> Frank

Sure. For maximum speed, something like:

pg_dump [options] source_db | pigz - | ssh -e none user@target "gunzip - | psql [options] target_db"

Depending on your hardware, though, doing a custom-format backup to a file and
then running a parallel restore from it would probably end up faster overall,
and you get to keep the backup if needed. In my experience, the restore is a
lot slower than the backup.
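
For the custom-format route, something along these lines (the dump path and
the -j job count are just placeholders to adjust for your setup):

pg_dump -Fc [options] -f /path/to/source_db.dump source_db
pg_restore -j 4 [options] -d target_db /path/to/source_db.dump

The -j flag runs the data-loading and index-building steps in parallel, which
is where most of the restore time usually goes.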

Slony is also a great option for avoiding most of the downtime, at the expense
of a lot of setup and testing time.

