Philip Warner wrote:
> At 17:56 30/06/00 +0200, Jan Wieck wrote:
> >
> >For God's sake, they don't have. And I'm uncertain that it
> >should ever work.
>
> Sorry...I'm the one to blame for the suggestion. My only defense is that
> it was late, and I was misled by the parser...nevertheless.
Tom Lane wrote:
> COPY uses a streaming style of output. To generate INSERT commands,
> pg_dump first does a "SELECT * FROM table", and that runs into libpq's
> suck-the-whole-result-set-into-memory behavior. See nearby thread
> titled "Large Tables(>1 Gb)".
Hmm, any reason why pg_dump couldn't use a cursor and fetch the rows in
manageable batches?
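Something like the following is what I had in mind; it's only a rough,
untested sketch of cursor-based fetching through libpq (the connection
string and the table name "bigtable" are placeholders), not code that
necessarily belongs in pg_dump as-is:

/*
 * Sketch: fetch a large table in batches through a cursor, so libpq
 * never has to hold the whole result set in client memory.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    /* Cursors only live inside a transaction block. */
    res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE dump_cur CURSOR FOR SELECT * FROM bigtable");
    PQclear(res);

    for (;;)
    {
        int ntuples, i, j;

        /* Pull only 1000 rows at a time into client memory. */
        res = PQexec(conn, "FETCH 1000 FROM dump_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        ntuples = PQntuples(res);
        if (ntuples == 0)
        {
            PQclear(res);
            break;              /* no more rows */
        }
        for (i = 0; i < ntuples; i++)
        {
            /* here pg_dump would emit one INSERT statement per row */
            for (j = 0; j < PQnfields(res); j++)
                printf("%s%s", j ? "\t" : "", PQgetvalue(res, i, j));
            printf("\n");
        }
        PQclear(res);
    }

    res = PQexec(conn, "CLOSE dump_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}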
At 23:34 1/07/00 +1000, Martijn van Oosterhout wrote:
>
>> Philip Warner needs alpha testers for his new version of pg_dump ;-).
>> Unfortunately I think he's only been talking about it on pghackers
>> so far.
>
>What versions does it work on?
>
6.5.x and 7.0.x.
Which version are you running?
If anybody out there is comfortable with using make, has the PG sources on
their machine, and is interested in testing a new version of pg_dump,
please let me know.
Details of the revised pg_dump and the new pg_restore are at the end of
this message.
Thanks,
Philip Warner.
This is also available via ftp from:
ftp://ftp.rhyme.com.au/pub/pg_backup_110.tar.gz
Philip Warner
Albatross Consulting Pty. Ltd.
(A.C.N. 008 659 498)
At 11:33 1/07/00 +0200, Jan Wieck wrote:
>
>Was late for me too, and maybe the answer was too lazy. So
>let me give you an example of what I meant:
>
About five minutes after I hit the send button on my last message, I
realized the error of my ways (again). There are probably limitations one