> What is the fastest way to upgrade postgres for large databases that
> have binary objects?

Your procedure dumps and restores the databases twice. This seems less
than sound. My prediction is that you could get a 50% speed improvement
by fixing that ...

Thanks for the response. This would be wonderful if I can get my process right.
My assumption (probably incorrect) is that pg_dump has to be executed twice
on a database with blobs: once to get the data and once to get the blobs
(using the -b flag).

The only thing you really need pg_dumpall for is the global tables. I
would just use pg_dumpall -g to get those, and then use pg_dump -F c +
pg_restore for each actual database.
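A minimal sketch of that split, assuming two databases named app1 and app2 (placeholder names):

```shell
# Dump only the globals (roles, tablespaces) with pg_dumpall -g.
pg_dumpall -g > globals.sql

# Dump each database in the custom format; -F c is compressed and
# lets pg_restore reorder and selectively restore objects.
pg_dump -F c -f app1.dump app1
pg_dump -F c -f app2.dump app2

# On the new cluster: restore globals first, then each database.
# pg_restore -C creates the database before restoring into it.
psql -f globals.sql postgres
pg_restore -C -d postgres app1.dump
pg_restore -C -d postgres app2.dump
```

The database names and file names are only illustrative; the flags (-g, -F c, -f, -C, -d) are standard pg_dumpall/pg_dump/pg_restore options.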

This makes sense :) I assume that a single run of pg_dump with -b will
get all of the data, including the blobs?

Another thing is to make sure that pg_dump/pg_restore are not competing
with postgres for access to the same disk(s). One way to do that is to
run them from a different machine - they don't have to be run on the
server machine - of course then the network can become a bottleneck, so
keep an eye on that as well.
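Running the tools remotely just means pointing them at the server with the standard connection flags. A sketch, with old-server/new-server as hypothetical hostnames:

```shell
# Run the dump from a separate machine so pg_dump's disk I/O does not
# compete with the database server's disks; -h/-p select the old server.
pg_dump -h old-server -p 5432 -F c -b -f mydb.dump mydb

# Restore straight into the new server over the network.
pg_restore -h new-server -p 5432 -C -d postgres mydb.dump
```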

We are using separate servers for dump and restore.

Thanks again for your suggestions. This helps immensely.
