On 8 Apr 2016 9:14 pm, "Pavel Stehule" <pavel.steh...@gmail.com> wrote:
> 2016-04-08 20:54 GMT+02:00 Andrew Dunstan <and...@dunslane.net>:
>> I should add that I've been thinking about this some more, and that I now 
>> agree that something should be done to support this at the SQL level, mainly 
>> so that clients can manage very large pieces of data in a stream-oriented 
>> fashion rather than having to marshall the data in memory to load/unload via 
>> INSERT/SELECT. Anything that is client-side only is likely to have this 
>> memory issue.
>>
>> At the same time I'm still not entirely convinced that COPY is a good 
>> vehicle for this. It's designed for bulk records, and already quite complex. 
>> Maybe we need something new that uses the COPY protocol but is more 
>> specifically tailored for loading or sending large singleton pieces of data.
>
>
> Now there is a little more time to think about it. But it is hard to
> design anything simpler than the COPY syntax that will support both
> directions.

Sorry for arriving late and adding to the bikeshedding. Maybe the
answer is to make COPY pluggable. It seems to me that it would be
relatively straightforward to add an extension mechanism for copy
output and input plugins that could support any format expressible as
a binary stream. Raw output would then be an almost trivial plugin.
Others could implement JSON, protocol buffers, Redis bulk load, BSON,
ASN.1, or whatever other serialisation format is in vogue. It will
still have the same backwards-compatibility issues as adding the raw
output, but the payoff is greater.
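
To make that a bit more concrete, here is a very rough sketch of what
such a plugin table might look like, loosely modelled on how FDW
handlers return a struct of callbacks. To be clear, none of this
exists today: CopyFormatRoutine and all of the hook names below are
made up purely for illustration.

    #include "postgres.h"
    #include "lib/stringinfo.h"
    #include "nodes/pg_list.h"

    /* Hypothetical callback table a COPY format extension would register. */
    typedef struct CopyFormatRoutine
    {
        /* called once before the first datum is processed */
        void (*startup) (void *state, List *options);

        /* COPY TO: serialise one value onto the output buffer */
        void (*emit) (void *state, Datum value, bool isnull,
                      StringInfo out);

        /* COPY FROM: consume bytes from the stream, produce one value;
         * returns false when the stream is exhausted */
        bool (*read_next) (void *state, StringInfo in,
                           Datum *value, bool *isnull);

        /* called once after the last datum */
        void (*shutdown) (void *state);
    } CopyFormatRoutine;

    /* A "raw" plugin for single binary blobs would be almost trivial:
     * emit() appends the bytea verbatim, and read_next() slurps the
     * whole stream into one datum and returns it exactly once. */

On the SQL side the client could presumably select the plugin through
the existing option syntax, e.g. COPY t FROM STDIN (FORMAT 'raw') --
again just illustrative, the option name and the lookup mechanism
would need to be worked out.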

Regards,
Ants Aasma

