On Tue, Apr 07, 2026 at 10:52:54AM +0200, Willy Tarreau wrote:
> Hi Greg,
> 
> On Tue, Apr 07, 2026 at 09:48:14AM +0200, Greg Kroah-Hartman wrote:
> > slz_encode() has no output buffer bound parameter; it writes
> > unconditionally to strm->outbuf. The function comment claims "output
> > result may be up to 5 bytes larger than the input", but this only
> > holds when the bit9>=52 heuristic at slz.c:634 forces a switch to
> > stored blocks. An attacker can defeat the heuristic by interleaving
> > short (4-byte) matches between runs of <= 51 literal bytes with
> > values >= 144, which are encoded with 9-bit fixed Huffman codes.
> > 
> > With this pattern, ~51 9-bit literals plus one ~16-bit match per 55
> > input bytes gives ~8.6 bits/byte; even a pure run of literals >= 144
> > with no matches expands by 12.5% (9 bits per 8-bit byte). The
> > comp_http_payload input cap was b_size(&trash) with no headroom, so
> > a 16336-byte HTX DATA block of crafted data expands to ~17600+ bytes
> > written into a 16384-byte trash buffer.
> 
> I'm still trying to figure out if/how this one can really happen,
> because if so, it's a libslz problem: it doesn't stand by its promise.
> In that case I'd rather not change the haproxy code to work around the
> limit, and instead try to fix libslz so that it respects its contract.
> 
> I must say I'm a bit surprised, considering that many terabytes have
> passed through it (even my backups pass through it), and that even
> without reaching the point of crashing, it should at least result in
> data corruption that would be detected when trying to decompress.
> 
> I'm now following the code with the description above in mind;
> however, if you or the tool you relied on can produce such content,
> it would help me immensely.

I'll send this to you off-list in a bit...
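In the meantime, here is a back-of-the-envelope check of the arithmetic quoted above. It assumes ~16 bits per match (roughly a 7-bit fixed-Huffman length code, a 5-bit distance code, and a few extra bits); these are estimates of the pattern's cost, not measured encoder output:

```python
# Sketch of the expansion arithmetic from the report.
# Assumed pattern: 51 literals with byte values >= 144 (9-bit fixed
# Huffman codes) followed by one 4-byte match, repeating every 55
# input bytes. MATCH_BITS is an estimate, not a measured figure.
LITERAL_BITS = 9            # fixed Huffman length for literals 144..255
MATCH_BITS = 16             # ~7-bit length code + 5-bit distance + extras
GROUP_IN = 51 + 4           # 51 literals + one 4-byte match = 55 bytes in
GROUP_OUT_BITS = 51 * LITERAL_BITS + MATCH_BITS   # 475 bits out

bits_per_byte = GROUP_OUT_BITS / GROUP_IN         # ~8.64 bits/byte

HTX_BLOCK = 16336           # crafted DATA block size from the report
TRASH_BUF = 16384           # trash buffer size
est_out_bytes = HTX_BLOCK * bits_per_byte / 8     # ~17600+ bytes

print(f"~{bits_per_byte:.2f} bits/byte -> ~{est_out_bytes:.0f} output "
      f"bytes written into a {TRASH_BUF}-byte buffer")
```

Even ignoring block headers and the final partial group, the estimate already overshoots the 16384-byte buffer by well over a kilobyte.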

