Tom Lane wrote:
> Andreas Pflug <[EMAIL PROTECTED]> writes:
>> The attached patch implements COPY ... WITH [BINARY] COMPRESSION
>> (compression implies BINARY). The copy data uses bit 17 of the flag
>> field to identify compressed data.


> I think this is a pretty horrid idea, because it changes pg_lzcompress
> from an unimportant implementation detail into a backup file format
> that we have to support till the end of time.  What happens if, say,
> we need to abandon pg_lzcompress because we find out it has patent
> problems?
>
> It *might* be tolerable if we used gzip instead,

I used pg_lzcompress because it's already present in the backend. I'm fine with any other good compression algorithm.

> but I really don't see
> the argument for doing this inside the server at all: piping to gzip
> seems like a perfectly acceptable solution,

As I said, that only holds if the result can be piped into gzip efficiently. The issue already arises whenever psql or any other COPY client (Slony, pg_dump) is not on the same machine: network bandwidth will limit throughput, because with client-side gzip the uncompressed data has to cross the wire first.
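To make the bandwidth argument concrete, here is a rough sketch of the arithmetic, using Python's zlib as a stand-in for whatever compressor the server would use (the sample row and the resulting ratio are purely illustrative):

```python
import zlib

# A chunk of typical tab-separated COPY output (repetitive, compresses well).
row = "12345\tsome customer name\t2005-01-01\t0.00\n"
payload = (row * 10000).encode()

compressed = zlib.compress(payload, 6)
ratio = len(compressed) / len(payload)

# With client-side gzip the full payload crosses the network first;
# with server-side compression only the compressed bytes do.
print(len(payload), len(compressed), round(ratio, 3))
```

For data that compresses to a small fraction of its original size, compressing before the network hop reduces the wire traffic by roughly the same fraction.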

> quite possibly with higher
> performance than doing it all in a single process (which isn't going
> to be able to use more than one CPU).

Which is pretty normal for pgsql.

> I don't see the argument for restricting it to binary only, either.

That's not a restriction, but a consequence: compressed data is inherently binary. Marking it as binary also makes it work with older frontends, as long as they don't try to interpret the data. In fact, all 8.x psql versions should work (with COPY STDxx, not \copy).
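For reference, a COPY BINARY stream begins with an 11-byte signature, a 32-bit flags field, and a 32-bit header-extension length; bits 16-31 of the flags are "critical" (a reader must reject any it doesn't recognize), bit 16 is the documented OID flag, and bit 17 is the bit the patch claims for compression. A minimal sketch of a reader honoring that bit (`parse_copy_header` and the flag names are illustrative, not from the patch):

```python
import struct

SIGNATURE       = b"PGCOPY\n\xff\r\n\x00"  # 11-byte COPY BINARY signature
FLAG_OIDS       = 1 << 16                  # documented flag: OIDs included per tuple
FLAG_COMPRESSED = 1 << 17                  # bit the patch proposes for compressed data
KNOWN_CRITICAL  = FLAG_OIDS | FLAG_COMPRESSED

def parse_copy_header(buf):
    """Parse the fixed COPY BINARY header; return (flags, offset of row data)."""
    if not buf.startswith(SIGNATURE):
        raise ValueError("not a COPY BINARY stream")
    flags, ext_len = struct.unpack_from("!II", buf, len(SIGNATURE))
    # Bits 16-31 are critical: reject any we do not understand.
    if flags & ~KNOWN_CRITICAL & 0xFFFF0000:
        raise ValueError("unknown critical format bits: 0x%08x" % flags)
    return flags, len(SIGNATURE) + 8 + ext_len

# Build a minimal header with the proposed compression bit set.
header = SIGNATURE + struct.pack("!II", FLAG_COMPRESSED, 0)
flags, offset = parse_copy_header(header)
print(bool(flags & FLAG_COMPRESSED), offset)   # → True 19
```

Because bit 17 sits in the critical range, an old frontend that parses the header properly would refuse the stream rather than misread it, while one that merely passes the bytes through (as psql does for COPY TO STDOUT) is unaffected.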

Do you have a comment on the progress notification and its impact on COPY TO STDOUT?

Regards,
Andreas
