On 04.03.19 at 14:36, Willy Tarreau wrote:
>>> can document such limits and let users decide on their own. We'll
>>> need the equivalent of maxzlibmem though (or better, we can reuse it
>>> to keep a single tunable and indicate it serves for any compression
>>> algo so that there isn't the issue of "what if two frontends use a
>>> different compression algo").
>> I guess one has to plug in a custom allocator to do this.
> Quite likely, which is another level of pain :-/
>> The library
>> appears to handle the OOM case (but I did not check what happens if the
>> OOM is encountered halfway through compression).
> The problem is not as much how the lib handles the OOM situation as
> how 100% of the remaining code (haproxy,openssl,pcre,...) handles it
> once brotli makes this possibility a reality. We're always extremely
> careful to make sure it still works in this situation by serializing
> what can be, but we've already been hit by bugs in openssl and haproxy
> at least.
> For now, until we figure out a way to properly control the resource usage
> of this lib, I'm not a big fan of merging it, as it's clear that it
> *will* cause lots of trouble. Seeing users complain here on the list
> is one thing, but thinking about their crashed or frozen LB in prod
> is another one, and I'd rather not cross this boundary especially
> given the small gains we've seen that very few people would take for
> a valuable justification for killing their production :-/

One could limit the overall brotli resource usage by returning NULL from
the custom allocator whenever the *total* (as opposed to the per-stream)
brotli memory consumption exceeds a configured level. The handling of
OOMs in the remaining code then becomes irrelevant, because brotli is
artificially capped at a (much) lower memory limit that leaves headroom
for the other parts.
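A minimal sketch of what I mean, assuming brotli's allocator hooks
(BrotliEncoderCreateInstance() takes an alloc/free pair plus an opaque
pointer); the names limited_alloc/limited_free, total_brotli_mem and
BROTLI_MEM_LIMIT are illustrative, not anything haproxy ships. Since
brotli does not pass the size back on free, a small header is prepended
to each allocation to keep the accounting accurate:

```c
#include <stddef.h>
#include <stdlib.h>

/* hypothetical global cap, analogous in spirit to maxzlibmem */
#define BROTLI_MEM_LIMIT (64UL * 1024 * 1024)

/* header storing the allocation size; union keeps payload aligned */
typedef union {
	size_t size;
	max_align_t align;
} mem_hdr;

static size_t total_brotli_mem; /* bytes currently handed out to brotli */

void *limited_alloc(void *opaque, size_t size)
{
	mem_hdr *hdr;

	(void)opaque;
	if (total_brotli_mem + size > BROTLI_MEM_LIMIT)
		return NULL; /* brotli sees OOM for this stream only */
	hdr = malloc(sizeof(*hdr) + size);
	if (!hdr)
		return NULL;
	hdr->size = size;
	total_brotli_mem += size;
	return hdr + 1;
}

void limited_free(void *opaque, void *address)
{
	mem_hdr *hdr;

	(void)opaque;
	if (!address)
		return;
	hdr = (mem_hdr *)address - 1;
	total_brotli_mem -= hdr->size;
	free(hdr);
}
```

The pair would then be installed per encoder instance, e.g.
BrotliEncoderCreateInstance(limited_alloc, limited_free, NULL). In a
threaded build the counter would of course need to be atomic or
per-thread, which this sketch ignores.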

Best regards
Tim Düsterhus
