Simon Riggs wrote:
> On Sat, 2008-07-26 at 11:03 -0700, Joshua D. Drake wrote:
>> 2. We have no concurrency which means, anyone with any database over 50G
>> has unacceptable restore times.
>
> Agreed.

Sounds good.

> Doesn't help with the main element of dump time: one table at a time to
> one output file. We need a way to dump multiple tables concurrently,
> ending in multiple files/filesystems.
Agreed, but that is a problem I understand but don't have a solution for.
I am all ears for a way to fix it. One thought I had (and please be gentle
in response) was some sort of async transaction capability. I know that
libpq has the ability to send async queries. Is it possible to do this:
send async(copy table to foo)
send async(copy table to bar)
send async(copy table to baz)
Where all three copies are happening in the background?
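
For illustration, here is a minimal sketch of how that could look with
libpq's asynchronous API. It is not pg_dump code; it assumes one connection
per table (libpq runs only one command at a time per connection), and the
connection string, table names ("foo", "bar", "baz") and output file names
are all made up:

/*
 * Rough sketch, not pg_dump code: run several COPY ... TO STDOUT commands
 * at once with libpq.  Only one command can run per connection, so "three
 * copies in the background" really means three connections.  Connection
 * string, table names and file names are made up.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

#define NTABLES 3

int
main(void)
{
	const char *tables[NTABLES] = {"foo", "bar", "baz"};
	PGconn	   *conns[NTABLES];
	FILE	   *files[NTABLES];
	int			done[NTABLES] = {0, 0, 0};
	int			remaining = NTABLES;
	int			i;

	for (i = 0; i < NTABLES; i++)
	{
		char		sql[256];
		char		fname[64];
		PGresult   *res;

		conns[i] = PQconnectdb("dbname=mydb");
		if (PQstatus(conns[i]) != CONNECTION_OK)
		{
			fprintf(stderr, "connect: %s", PQerrorMessage(conns[i]));
			exit(1);
		}

		snprintf(fname, sizeof(fname), "%s.copy", tables[i]);
		files[i] = fopen(fname, "w");

		snprintf(sql, sizeof(sql), "COPY %s TO STDOUT", tables[i]);
		PQsendQuery(conns[i], sql);	/* returns without waiting */

		/* wait for the server to acknowledge the COPY and enter COPY OUT */
		res = PQgetResult(conns[i]);
		if (PQresultStatus(res) != PGRES_COPY_OUT)
		{
			fprintf(stderr, "COPY %s: %s", tables[i], PQerrorMessage(conns[i]));
			exit(1);
		}
		PQclear(res);
	}

	/* poll all connections, writing out COPY data as it arrives
	 * (a real program would select() on PQsocket() instead of spinning) */
	while (remaining > 0)
	{
		for (i = 0; i < NTABLES; i++)
		{
			char	   *buf;
			int			len;

			if (done[i])
				continue;

			PQconsumeInput(conns[i]);

			/* async = 1: return 0 instead of blocking when no row is ready */
			while ((len = PQgetCopyData(conns[i], &buf, 1)) > 0)
			{
				fwrite(buf, 1, len, files[i]);
				PQfreemem(buf);
			}

			if (len == -1)		/* this table is finished */
			{
				PGresult   *res;

				while ((res = PQgetResult(conns[i])) != NULL)
					PQclear(res);
				fclose(files[i]);
				PQfinish(conns[i]);
				done[i] = 1;
				remaining--;
			}
			else if (len == -2)
			{
				fprintf(stderr, "COPY %s failed: %s", tables[i],
						PQerrorMessage(conns[i]));
				exit(1);
			}
		}
	}

	return 0;
}

Each COPY then streams on its own connection, so the dumps overlap rather
than running one after another.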
Sincerely,
Joshua D. Drake