Hi again,

small update on this below.

On Tue, Aug 13, 2019 at 10:50:46PM +0200, Willy Tarreau wrote:
> Hi Aleks,
> On Tue, Aug 13, 2019 at 07:02:49PM +0000, Aleksandar Lazic wrote:
> > Has anyone seen this, and does anyone have information on whether
> > haproxy is vulnerable to these attacks?
> > 
> > https://github.com/Netflix/security-bulletins/blob/master/advisories/third-party/2019-002.md
> Yes I had a look after being kindly notified by Piotr. In practice almost
> all of them are irrelevant in our case, either because the processing cost
> is trivial (e.g. RST_STREAM, PING, SETTINGS, empty frames...) or because
> we don't have the required feature (priority, push_promise). The one I'd
> like to have a deeper look at is the 1-byte window increments, which may
> result in several streams being woken up just to write one byte and wait
> for the next update. It should even have a higher impact on 1.9+ than on
> 1.8, but nothing alarming in my opinion. We could easily mitigate this by
> waiting for at least a part of the pending data to be refilled before
> waking the streams up.

So I checked everything between 1.8 and 2.1-dev today, and the result is
that we're not impacted by these issues (verified by both code review and
tests). I found that the code was nicely cleaned up between 1.8 and 2.1:
I was first worried about an apparently missing 0-length check on the
headers frame in 1.9+, then figured out I had already centralized these
checks for safer control :-)

I ran a few performance tests using some of these types of attacks and
as expected my laptop swallows multi-gigabit/s of such traffic on a
single core. And while looking into this I discovered a suboptimal
handling of window increment processing which can cause some minor
unfairness between several streams if the window grows in small steps
(typically what can happen if a client sends a connection window update
for each received frame). I found that improving this point doubled our
capacity to swallow WINDOW_UPDATE frames, jumping from 30 million frames
per second to 65 million per second, because we no longer try to make
some streams prepare DATA frames if others are already blocked. So I'm
going to merge this into -dev once cleaned up (but don't expect me to
backport these changes since I'm not seeing any real world case where it
would matter).

I also have some slowly progressing work on a trace mechanism; it is
still rudimentary but already proved very useful here. I hope to finish
it soon, and it may possibly end up being backported if we find that it
speeds up bug chasing and resolving interoperability issues at a low level.
More on this later.

