On 8 December 2016 at 07:36, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Likewise, the need for clients to be able to transfer data in chunks
> gets pressing well before you get to 1GB.  So there's a lot here that
> really should be worked on before we try to surmount that barrier.

Yeah. I tend to agree with Tom here. Allowing >1GB varlena-like
objects, when we can barely cope with our existing ones in
dump/restore, in clients, etc., doesn't strike me as quite the right
direction to go in.

I understand it solves a specific, niche case you're dealing with when
exchanging big blobs of data with a GPGPU. But since the client never
actually sees that large blob (it's split up into pieces that the
current protocol and interfaces can handle), why is it necessary to
have instances of a single data type with >1GB values, rather than
take a TOAST-like / pg_largeobject-like approach and split the data
up for storage?
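
To make that concrete, here's a rough client-side sketch (purely
illustrative; the blob_chunks table, the 64MB chunk size and the use
of psycopg2 are just assumptions for the example, not anything that
exists or is being proposed):

    # Assumed schema, made up for this sketch:
    #   CREATE TABLE blob_chunks (
    #       blob_id  bigint,
    #       chunk_no int,
    #       data     bytea,
    #       PRIMARY KEY (blob_id, chunk_no)
    #   );
    import psycopg2

    CHUNK_SIZE = 64 * 1024 * 1024  # 64MB pieces, well under the 1GB varlena limit

    def store_blob(conn, blob_id, data):
        """Split one logical blob (bytes) into fixed-size chunks, one row each."""
        with conn.cursor() as cur:
            for chunk_no, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
                cur.execute(
                    "INSERT INTO blob_chunks (blob_id, chunk_no, data)"
                    " VALUES (%s, %s, %s)",
                    (blob_id, chunk_no,
                     psycopg2.Binary(data[offset:offset + CHUNK_SIZE])))
        conn.commit()

    def load_blob(conn, blob_id):
        """Reassemble the blob by reading its chunks back in order."""
        with conn.cursor() as cur:
            cur.execute(
                "SELECT data FROM blob_chunks WHERE blob_id = %s"
                " ORDER BY chunk_no",
                (blob_id,))
            return b"".join(bytes(row[0]) for row in cur)

No single datum ever exceeds 1GB, so the existing protocol, dump/restore
and TOAST machinery keep working unchanged; reassembly is the client's
(or an extension's) problem, which is more or less what pg_largeobject
already does, just with much smaller chunks.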

I'm concerned that this adds a special-case format that will create
maintenance burden and pain down the track, and it won't help with the
pain points users already face, like errors when dumping/restoring rows
with big varlena values, problems exchanging them efficiently over the
wire protocol, etc.

-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

