Philip Warner wrote:
At 23:34 1/07/00 +1000, Martijn van Oosterhout wrote:
Philip Warner needs alpha testers for his new version of pg_dump ;-).
Unfortunately I think he's only been talking about it on pghackers
so far.
What versions does it work on?
6.5.x and 7.0.x.
Which version are you running?
Tom Lane wrote:
COPY uses a streaming style of output. To generate INSERT commands,
pg_dump first does a "SELECT * FROM table", and that runs into libpq's
suck-the-whole-result-set-into-memory behavior. See nearby thread
titled "Large Tables(1 Gb)".
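Tom's distinction can be illustrated with a short Python analogy (this is not pg_dump's or libpq's actual code): materializing the entire result set before the caller sees any row, as libpq does for a plain SELECT, versus handing rows over as they arrive, as COPY's streaming output does.

```python
# Analogy only: "select_all" buffers every row in client memory at once,
# like libpq handling "SELECT * FROM table"; "copy_stream" yields rows
# one at a time, like COPY's streaming protocol, so peak client memory
# stays constant regardless of table size.

def fake_table(nrows):
    """Simulate server-side row production."""
    for i in range(nrows):
        yield (i, f"row-{i}")

def select_all(nrows):
    # libpq-style: the whole result set lives in client memory at once.
    return list(fake_table(nrows))

def copy_stream(nrows):
    # COPY-style: each row is emitted as soon as it is produced.
    for row in fake_table(nrows):
        yield "\t".join(map(str, row)) + "\n"
```

With `copy_stream` the dump can be written out line by line, which is why the COPY path does not blow up memory on large tables.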
Hmm, any reason why pg_dump couldn't do
pg_dumping the DB takes over half an hour
(mainly because pg_dump chews all available memory). It would be nicer
if we knew that tarring it up would work also...
- Original Message -
From: "mikeo" [EMAIL PROTECTED]
Subject: [GENERAL] disk backups
hi,
would someone be so kind as
Martijn van Oosterhout [EMAIL PROTECTED] writes:
Is there a better way? Here pg_dumping the DB takes over half an hour
(mainly because pg_dump chews all available memory).
pg_dump shouldn't be a performance hog if you are using the default
COPY-based style of data export. I'd only expect
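For the non-COPY (INSERT-style) path that triggers the buffering problem, one standard workaround is a server-side cursor fetched in bounded batches, so at most one batch of rows sits in client memory. This is a hypothetical sketch against a generic DB-API connection, not pg_dump's actual implementation; the names `rows_in_batches`, `dump_cur`, and `batch_size` are made up for illustration.

```python
# Hypothetical sketch: replace "SELECT * FROM table" (which the client
# library buffers in full) with a server-side cursor and bounded FETCHes.
# "conn" is any DB-API connection; the table name is interpolated here
# for illustration only (real code must quote identifiers safely).

def rows_in_batches(conn, table, batch_size=1000):
    cur = conn.cursor()
    cur.execute("BEGIN")
    cur.execute(f"DECLARE dump_cur CURSOR FOR SELECT * FROM {table}")
    try:
        while True:
            cur.execute(f"FETCH {batch_size} FROM dump_cur")
            batch = cur.fetchall()
            if not batch:
                break  # cursor exhausted
            for row in batch:
                yield row
    finally:
        cur.execute("CLOSE dump_cur")
        cur.execute("COMMIT")
```

Only `batch_size` rows are ever held client-side, which keeps memory flat even for the INSERT-generating style of dump.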