On 24 March 2016 at 01:14, Daniel Verite <dan...@manitou-mail.org> wrote:


>
> It provides a useful mitigation to dump/reload databases having
> rows in the 1GB-2GB range, but it works under these limitations:
>
> - no single field has a text representation exceeding 1GB.
> - no row as text exceeds 2GB (\copy from fails beyond that. AFAICS we
>   could push this to 4GB with limited changes to libpq, by
>   interpreting the Int32 field in the CopyData message as unsigned).


This seems like a worthwhile mitigation for an issue multiple people have
hit in the wild, and more will as data volumes grow.
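
For anyone curious what the unsigned-length idea looks like at the message
level, here's a rough sketch (not the actual libpq source; the function and
variable names are invented for illustration) of parsing a CopyData ('d')
header with its Int32 length word read as uint32, which is what would push
the per-row ceiling from 2GB to 4GB:

/*
 * Illustrative sketch only, not real libpq code.
 */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>          /* ntohl */

static int
parse_copydata_header(const char *buf, uint32_t *payload_len)
{
    uint32_t len;

    if (buf[0] != 'd')          /* not a CopyData message */
        return -1;

    memcpy(&len, buf + 1, 4);   /* length word, network byte order */
    len = ntohl(len);           /* interpret as unsigned, not signed */

    if (len < 4)                /* the length word includes itself */
        return -1;

    *payload_len = len - 4;     /* bytes of row data that follow */
    return 0;
}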

Giving Pg generally graceful handling of big individual datums is going to
be a bit of work, though: support for wide-row, big-Datum COPY in and out;
efficient lazy fetching of large TOASTed data by follow-up client requests;
range fetching of large compressed TOASTed values (possibly at the price of
worse compression) without having to decompress everything up to the start
of the desired range. Lots of fun.
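
In the meantime, if a big value is stored uncompressed (ALTER TABLE ... SET
STORAGE EXTERNAL), substring() can already fetch a slice without detoasting
the whole datum, so a client can stream it in bounded chunks. A rough
libpq sketch; the table and column names (big_table, payload bytea) are
invented, and id is assumed to be a trusted numeric key:

#include <stdio.h>
#include <libpq-fe.h>

#define CHUNK (64 * 1024 * 1024)    /* 64MB per round trip */

static void
fetch_in_slices(PGconn *conn, const char *id)
{
    long offset = 1;                /* substring() is 1-based */

    for (;;)
    {
        char      sql[256];
        PGresult *res;
        int       len;

        snprintf(sql, sizeof(sql),
                 "SELECT substring(payload from %ld for %d) "
                 "FROM big_table WHERE id = %s",
                 offset, CHUNK, id);

        /* resultFormat = 1: binary, so bytea comes back as raw bytes */
        res = PQexecParams(conn, sql, 0, NULL, NULL, NULL, NULL, 1);
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);
            break;
        }

        len = PQgetlength(res, 0, 0);
        fwrite(PQgetvalue(res, 0, 0), 1, len, stdout);
        PQclear(res);

        if (len < CHUNK)
            break;                  /* short slice: end of value */
        offset += CHUNK;
    }
}

It trades away compression for seekability, which is more or less the same
price the in-core range-fetching idea above would have to pay.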

At least we have lob / pg_largeobject.
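
And the large object API already offers seekable, chunked access well past
the 1GB datum ceiling. A minimal read loop, with error handling mostly
omitted, assuming the OID came from an earlier lo_import or lo_creat:

#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>         /* INV_READ */

static void
stream_large_object(PGconn *conn, Oid loid)
{
    char buf[8192];
    int  fd, n;

    /* large-object descriptors are only valid inside a transaction */
    PQclear(PQexec(conn, "BEGIN"));

    fd = lo_open(conn, loid, INV_READ);
    if (fd >= 0)
    {
        while ((n = lo_read(conn, fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);
        lo_close(conn, fd);
    }

    PQclear(PQexec(conn, "COMMIT"));
}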

-- 
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
