On 12/22/20 8:03 PM, Tom Lane wrote:
> Tomas Vondra <tomas.von...@enterprisedb.com> writes:
>> I don't see any benchmark results in this thread allowing me to reach
>> that conclusion, and I find it hard to believe that 200MB/client is a
>> sensible trade-off.
>>
>> It assumes you have that much memory, and it may allow an easy DoS
>> attack (although maybe it's no worse than e.g. generating a lot of I/O
>> or running an expensive function). Maybe allowing the compression level
>> / decompression buffer size to be limited in postgresql.conf would be
>> enough. Or maybe allow disabling such compression algorithms
>> altogether.
>
> The link Ken pointed at suggests that restricting the window size to
> 8MB is a common compromise.  It's not clear to me what that does to
> the achievable compression ratio.  Even 8MB could be an annoying cost
> if it's being paid per-process, on both the server and client sides.


Possibly, but my understanding is that it's merely a recommendation for the decoder library (e.g. libzstd), and it's not clear to me if/how it relates to the compression level or how to influence it.
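
FWIW, skimming zstd.h, the advanced API (v1.4.0+) does seem to expose explicit knobs for the window on both sides, independent of the compression level. A minimal sketch, assuming I'm reading the header correctly (error checking omitted):

#include <zstd.h>

int main(void)
{
    ZSTD_CCtx *cctx = ZSTD_createCCtx();
    ZSTD_DCtx *dctx = ZSTD_createDCtx();

    /* encoder: don't emit frames needing more than a 2^23 = 8MB window,
     * regardless of what the compression level would otherwise pick */
    ZSTD_CCtx_setParameter(cctx, ZSTD_c_windowLog, 23);

    /* decoder: reject frames whose header declares a window larger than
     * 8MB, instead of allocating whatever the peer asks for */
    ZSTD_DCtx_setParameter(dctx, ZSTD_d_windowLogMax, 23);

    /* (both calls return a size_t that real code should check
     * with ZSTD_isError) */

    ZSTD_freeCCtx(cctx);
    ZSTD_freeDCtx(dctx);
    return 0;
}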

Judging from the results shared by Daniil, the per-client overhead seems way higher than 8MB, so either libzstd does not respect this recommendation, or there's something else going on.
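
One way to sanity-check that would be to ask libzstd itself how much memory it expects a streaming context to need. These estimate functions are hidden behind ZSTD_STATIC_LINKING_ONLY, so treat this as a rough sketch only:

#define ZSTD_STATIC_LINKING_ONLY
#include <zstd.h>
#include <stdio.h>

int main(void)
{
    /* per-connection streaming compression context, by level */
    for (int level = 1; level <= 19; level += 6)
        printf("CStream, level %2d: %zu bytes\n",
               level, ZSTD_estimateCStreamSize(level));

    /* per-connection streaming decompression context,
     * including the buffer for an 8MB window */
    printf("DStream, 8MB window: %zu bytes\n",
           ZSTD_estimateDStreamSize((size_t) 8 << 20));

    return 0;
}

If those numbers come out nowhere near 200MB, the overhead is presumably coming from somewhere else (e.g. our own per-connection buffers), not from the zstd contexts themselves.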
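
As for the postgresql.conf idea quoted above, I'm imagining something like this (the GUC names are entirely made up, nothing like this exists today):

# hypothetical settings, just to illustrate the idea
libpq_compression = on               # allow protocol-level compression
libpq_compression_level = 3          # cap the zstd compression level
libpq_compression_max_window = 8MB   # cap the per-client decompression buffer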

regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

