> On 6 Nov 2020, at 00:22, Peter Eisentraut 
> <peter.eisentr...@enterprisedb.com> wrote:
> 
> On 2020-11-02 20:50, Andres Freund wrote:
>> On 2020-10-31 22:25:36 +0500, Andrey Borodin wrote:
>>> But the price of compression is 1 cpu for 500MB/s (zstd). With a
>>> 20Gbps network adapters cost of recompressing all traffic is at most
>>> ~4 cores.
>> It's not quite that simple, because presumably each connection is going
>> to be handled by one core at a time in the pooler. So it's easy to slow
>> down peak throughput if you also have to deal with TLS etc.
> 
> Also, current deployments of connection poolers use rather small machine 
> sizes.  Telling users you need 4 more cores per instance now to decompress 
> and recompress all the traffic doesn't seem very attractive. Also, it's not 
> unheard of to have more than one layer of connection pooling.  With that, 
> this whole design sounds a bit like a heat-generation system. ;-)

Users should ensure good bandwidth between the pooler and the DB; at the very 
least they should be within one availability zone. That makes compression 
between the pooler and the DB unnecessary. Cross-datacenter traffic, on the 
other hand, is many times more expensive.

I agree that switching between compression levels (including turning 
compression off) seems like a nice feature. But:
1. Its scope of usefulness is an order of magnitude smaller than compressing 
the whole connection.
2. The protocol for this feature is significantly more complicated.
3. Restarted compression is much less efficient and effective, because every 
restart discards the history the compressor has accumulated so far.
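To illustrate point 3, here is a minimal sketch using Python's stdlib zlib 
rather than the zstd used in the patch (the effect is the same for any 
history-based compressor); the message text and the chunk size are made up 
for illustration:

```python
import zlib

# Simulated protocol traffic: many similar messages, as query text
# and result rows on a real connection tend to repeat heavily.
data = b"SELECT * FROM t WHERE id = 42;\n" * 1000

# One long-lived compression context for the whole connection:
# later messages are encoded as cheap back-references to earlier ones.
c = zlib.compressobj(6)
whole = c.compress(data) + c.flush()

# Restarting the context periodically (as toggling compression
# mid-connection would force) throws away that accumulated history
# and pays per-stream header overhead each time.
restarted = b""
for i in range(0, len(data), 3100):
    c = zlib.compressobj(6)
    restarted += c.compress(data[i:i + 3100]) + c.flush()

print(len(data), len(whole), len(restarted))
```

The continuously compressed stream comes out noticeably smaller than the 
restarted one, even though both see exactly the same bytes.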

Can we design the protocol so that this feature can be implemented in the 
future, while for now focusing on getting things compressed at all? Are there 
any drawbacks to this approach?

Best regards, Andrey Borodin.
