On 22.12.2020 22:03, Tom Lane wrote:
> Tomas Vondra <tomas.von...@enterprisedb.com> writes:
>> I don't see any benchmark results in this thread allowing me to make
>> that conclusion, and I find it hard to believe that 200MB/client is a
>> sensible trade-off.
>> It assumes you have that much memory, and it may allow an easy DoS attack
>> (although maybe it's not worse than e.g. generating a lot of I/O or
>> running an expensive function). Maybe allowing limiting the compression
>> level / decompression buffer size in postgresql.conf would be enough. Or
>> maybe allow disabling such compression algorithms altogether.
> The link Ken pointed at suggests that restricting the window size to
> 8MB is a common compromise.  It's not clear to me what that does to
> the achievable compression ratio.  Even 8MB could be an annoying cost
> if it's being paid per-process, on both the server and client sides.
>
> 			regards, tom lane
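
For reference, stock libzstd (1.4+) already lets both sides cap the window
through its advanced parameter API, so an 8MB limit does not need anything
exotic. Below is a minimal sketch (illustration only, not the patch code;
the helper names are mine) of what such a cap could look like:

    /*
     * Hypothetical sketch: bound per-connection zstd memory by capping the
     * window.  Uses only the stock libzstd >= 1.4 advanced API;
     * windowLog = 23 corresponds to a 2^23 = 8MB window.
     */
    #include <zstd.h>

    static ZSTD_CCtx *
    create_capped_cctx(int level)
    {
        ZSTD_CCtx  *cctx = ZSTD_createCCtx();

        if (cctx == NULL)
            return NULL;
        /* Limit the match window the compressor may reference. */
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_compressionLevel, level);
        ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 23);
        return cctx;
    }

    static ZSTD_DCtx *
    create_capped_dctx(void)
    {
        ZSTD_DCtx  *dctx = ZSTD_createDCtx();

        if (dctx == NULL)
            return NULL;
        /* Reject frames that would need more than an 8MB window to decode. */
        ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 23);
        return dctx;
    }

The decompressor side then refuses oversized frames instead of silently
allocating a large buffer, which also addresses the DoS concern above.
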
Please notice that my original intention was not to give the user (client) the possibility to choose the compression algorithm and compression level at all. All my previous experiments demonstrate that using a compression level higher than the default significantly decreases speed but does not improve the compression ratio, especially for compression of protocol messages.
Moreover, on some dummy data (like that generated by pgbench), zstd with the default compression level (1) shows a better compression ratio than with higher levels.
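
If somebody wants to reproduce this, a trivial standalone check against
plain libzstd is enough; something like the sketch below (illustration
only, and the sample buffer is just a placeholder for captured protocol
traffic):

    /* Compress the same sample at several zstd levels and compare ratios. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <zstd.h>

    static void
    report_ratio(const void *src, size_t srclen, int level)
    {
        size_t      bound = ZSTD_compressBound(srclen);
        void       *dst = malloc(bound);
        size_t      clen;

        if (dst == NULL)
            return;
        clen = ZSTD_compress(dst, bound, src, srclen, level);
        if (!ZSTD_isError(clen))
            printf("level %2d: %zu -> %zu bytes (ratio %.2f)\n",
                   level, srclen, clen, (double) srclen / clen);
        free(dst);
    }

    int
    main(void)
    {
        /* Stand-in for a captured sample of protocol traffic. */
        static char sample[64 * 1024];
        int         levels[] = {1, 3, 9, 19};

        memset(sample, 'x', sizeof(sample));    /* replace with real data */
        for (int i = 0; i < (int) (sizeof(levels) / sizeof(levels[0])); i++)
            report_ratio(sample, sizeof(sample), levels[i]);
        return 0;
    }
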

I had to add the possibility to specify the compression level and suggested compression algorithms because it was requested by reviewers. But I still think it was the wrong idea, and these results just prove it.
More flexibility is not always good...

Now there is a discussion concerning a way to switch the compression algorithm on the fly (a particular case: toggling compression for individual libpq messages). IMHO it is once again excessive flexibility which just increases complexity and gives nothing good in practice.