On Thu, Nov 27, 2014 at 11:59 PM, Michael Paquier
<michael.paqu...@gmail.com> wrote:
> On Thu, Nov 27, 2014 at 11:42 PM, Andres Freund <and...@2ndquadrant.com> 
> wrote:
>> One thing Heikki brought up somewhere, which I thought to be a good
>> point, was that it might be worthwhile to forget about compressing FPWs
>> themselves, and instead compress entire records when they're large. I
>> think that might just end up being rather beneficial, both for code
>> simplicity and for the achievable compression ratio.
> Indeed, that would be quite simple to do. Determining an ideal cap
> value is tricky, though. We could always use a GUC to control it, but
> such a setting would be sensitive to tune; still, we could recommend a
> value in the docs based on the average record sizes observed when
> running the regression tests.

Thinking more about that, it would be difficult to apply compression to
all records because of the buffer that needs to be pre-allocated for
compression; otherwise each code path creating a WAL record would have
to forecast the size of that record and resize the buffer accordingly
before entering a critical section. Of course we could still apply this
idea to records up to a given window size.
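To make that concrete, here is a rough sketch of what I have in mind;
the cap value and compress_data() below are made-up names for
illustration only, not from any posted patch:

#define WAL_COMPRESSION_WINDOW	(4 * BLCKSZ)	/* assumed cap */

/* Allocated once at startup, outside any critical section. */
static char record_scratch[WAL_COMPRESSION_WINDOW];

/*
 * Compress a whole record if it fits in the window; return false to
 * fall back to writing it uncompressed.  compress_data() stands in for
 * whatever compressor (pglz or other) ends up being used.
 */
static bool
maybe_compress_record(const char *rec, int len, int *compressed_len)
{
	if (len > WAL_COMPRESSION_WINDOW)
		return false;

	*compressed_len = compress_data(rec, len, record_scratch,
									WAL_COMPRESSION_WINDOW);
	return *compressed_len > 0 && *compressed_len < len;
}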
Still, the FPW compression does not have those concerns: the buffer
used for compression is capped at BLCKSZ for a single block, and at
nblk * BLCKSZ if blocks are grouped for compression.
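In other words the scratch buffer has a known upper bound, something
like the following (illustrative only):

/*
 * Upper bound on the compression scratch buffer: one block's worth of
 * data per backup block carried by the record.  (With pglz one would
 * actually reserve PGLZ_MAX_OUTPUT(BLCKSZ) per block to cover its small
 * worst-case expansion.)
 */
#define FPW_SCRATCH_SIZE(nblk)	((nblk) * BLCKSZ)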
Feel free to comment if I am missing something obvious.
Regards,
-- 
Michael

