On 27.02.19 at 05:12, Willy Tarreau wrote:
> Hi Tim,
> On Tue, Feb 26, 2019 at 06:16:12PM +0100, Tim Düsterhus wrote:
>> Willy,
>> On 13.02.19 at 17:57, Tim Duesterhus wrote:
>>> *snip*
>> Are you able to give some (first, basic) feedback on this patch already?
> Not yet. In fact I don't know much what to think about it. The patch
> itself is reasonably small, but what's the real cost of using this ?
> We've created libslz because zlib was not practically usable due to
> its insane amount of memory per stream (256 kB), which resulted in
> compression being disabled for many streams by lack of memory, hence
> in a lower overall compression ratio. Here I have no idea how much
> brotli requires, but if it compresses each stream slightly better than
> zlib but at roughly similar (or worse) costs, maybe in the end it will
> not be beneficial either ? So if you have numbers (CPU cost, pinned
> memory per stream), it would be very useful. Once this is known, we

As mentioned in my reply to Aleks I don't have any numbers, because I
don't know how to get them. My knowledge of both HAProxy's internals and
C is not strong enough to obtain them.

The manpage documents this:

>        Recommended input block size. Encoder may reduce this value,
>        e.g. if input is much smaller than input block size.
>        Range is from BROTLI_MIN_INPUT_BLOCK_BITS to
>        BROTLI_MAX_INPUT_BLOCK_BITS.
>        Note:
>            Bigger input block size allows better compression, but
>            consumes more memory.
>            The rough formula of memory used for temporary input storage
>            is 3 << lgBlock.

The default of this value depends on other configuration settings.

It is the only place that talks about memory. There's also this (still
open) issue: https://github.com/google/brotli/issues/389 "Functions to
calculate approximate memory usage needed for compression and
decompression".

> can document such limits and let users decide on their own. We'll
> need the equivalent of maxzlibmem though (or better, we can reuse it
> to keep a single tunable and indicate it serves for any compression
> algo so that there isn't the issue of "what if two frontends use a
> different compression algo").

I guess one has to plug in a custom allocator to do this. The library
appears to handle the OOM case (but I did not check what happens if the
OOM is encountered halfway through compression).

Best regards
Tim Düsterhus
