On Fri, May 20, 2022 at 12:16:07PM +0100, Mark Zealey wrote:
> Thanks, we may use this for a very rough proof-of-concept. However we are
> dealing with millions of concurrent connections, 10-100 million connections
> per day, so we'd prefer to pay someone to develop (+ test!) something for
> haproxy which will work at this scale

That's a big problem with gzip. While compression can be done stateless
(which is what we're doing with SLZ), decompression uses roughly 256 kB
of permanent RAM per *stream*. That's 256 GB of RAM for just one million
connections, 1 TB of RAM for just 4 million connections. Sadly, that's a
perfect example of a use case that requires extreme horizontal
scalability and that's better kept off the LB and performed on the servers.
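For what it's worth, the arithmetic is easy to sanity-check. A quick
sketch, using the ~256 kB-per-stream figure quoted above (the real cost
varies with the zlib window and buffer sizes):

```python
# Back-of-the-envelope check of the memory figures above, assuming
# ~256 kB of permanent inflate state per stream as quoted in the
# message; exact costs depend on the implementation and settings.
PER_STREAM_KB = 256

def decompression_ram_gb(streams: int) -> float:
    """RAM (in GB, decimal units) pinned by per-stream inflate state."""
    return streams * PER_STREAM_KB / 1e6

print(decompression_ram_gb(1_000_000))  # -> 256.0 (GB)
print(decompression_ram_gb(4_000_000))  # -> 1024.0 (GB), i.e. ~1 TB
```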

Isn't there any way to advertise support, or lack thereof, for compression?
Because if your real need is to extract the contents to perform some form
of processing, it would be better to interfere with the connection setup
to disable compression in the first place, rather than having to
decompress the stream.
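For example, if the traffic in question is plain HTTP, compression is
negotiated via the Accept-Encoding request header, and haproxy can strip
it so that servers never compress at all. A minimal sketch (the frontend
and backend names and addresses are placeholders, not from this thread):

```
# Minimal sketch, assuming HTTP traffic: strip the client's
# Accept-Encoding header so servers never negotiate compression,
# leaving the stream readable with no inflate state on the LB.
frontend fe_main
    bind :80
    # Remove the header advertising compression support
    http-request del-header Accept-Encoding
    default_backend be_servers

backend be_servers
    server s1 192.0.2.10:80
```

The trade-off is extra bandwidth between client and server, but no
per-stream decompression memory on the load balancer.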

I'm just trying to figure out a reasonable alternative, because in
addition to being extremely unscalable, this makes your LBs a trivial
DoS target.

