From time to time we have had complaints about slow dumps of large tables with
bytea columns; people often complain about a) the size and b) the duration of the dump.

The latter came up again recently: a customer wanted to dump large tables (approx. 12 GB in size) with pg_dump, but was annoyed by the performance. Using COPY BINARY reduced the time (unsurprisingly) to a fraction (from 12 minutes to 3 minutes).
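For illustration, a minimal sketch of what such a binary dump could look like from the shell; the database name "mydb", the table name "documents", and the helper function are hypothetical, and the WITH (FORMAT binary) spelling assumes PostgreSQL 9.0 or later:

```shell
# Build a binary COPY statement for a given table. FORMAT binary emits
# PostgreSQL's binary COPY format, which skips the text/escape encoding
# of bytea values and is where the speedup comes from.
binary_copy_cmd() {
  printf 'COPY %s TO STDOUT WITH (FORMAT binary)' "$1"
}

# Dump one table to a file (hypothetical names):
#   psql -d mydb -c "$(binary_copy_cmd documents)" > documents.bin
#
# Restore later into a compatible server:
#   psql -d mydb -c 'COPY documents FROM STDIN WITH (FORMAT binary)' < documents.bin
```

Note that doing this by hand, table by table, is exactly the chore that a pg_dump option would take over, including any referenced tables.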

As discussed in the past[1], pg_dump deliberately does not support BINARY, in order to preserve the portability and version independence of its dumps. I would like to bring that topic up again, since an option like --binary-copy seems interesting for use cases where portability and version issues don't matter and someone just wants a fast COPY of their data. It would make this task much easier, especially in the described case, where the customer has to dump referenced tables as well.

Another approach would be to dump only the bytea columns in binary format (not sure how doable that is, though).

Opinions, again?


[1] <http://archives.postgresql.org//pgsql-hackers/2007-12/msg00139.php>
--
 Thanks

                   Bernd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)