I believe that what haproxy currently does with connections when resources
are exhausted is comparable to a router's tail-drop congestion control:
when the servers and the queue are full, new connections are refused
(dropped).
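
Roughly, as a minimal C sketch (hypothetical names, just to make the
analogy concrete; haproxy's real internals obviously differ):

    /* Tail drop, approximately today's behavior: once the server and
     * its queue are both full, every new connection is refused,
     * no matter who it comes from. */
    int tail_drop(int cur_conns, int maxconn, int queue_len, int maxqueue)
    {
        return cur_conns >= maxconn && queue_len >= maxqueue; /* 1 = refuse */
    }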

I was thinking that moving to something more like RED (random early
detection) would be fairer, especially if the algorithm could be tuned by
the site operator. I would expect high and low watermarks that activate
and deactivate it, plus other tunable parameters to guide the selection of
which connections should be dropped: random choice, transfer rate,
include/exclude backend, ACLs, etc.
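
To make the watermark idea concrete, here is a minimal C sketch of the
classic RED drop-probability curve (low_wm, high_wm, and max_p are the
kind of tunables I have in mind; none of this is existing haproxy code):

    #include <stdlib.h> /* rand, RAND_MAX */

    /* Classic RED: drop probability is 0 below the low watermark,
     * rises linearly to max_p at the high watermark, and is 1 above. */
    double red_drop_prob(double avg_load, double low_wm, double high_wm,
                         double max_p)
    {
        if (avg_load < low_wm)
            return 0.0;
        if (avg_load >= high_wm)
            return 1.0;
        return max_p * (avg_load - low_wm) / (high_wm - low_wm);
    }

    /* Drop a given connection with that probability. */
    int should_drop(double avg_load, double low_wm, double high_wm,
                    double max_p)
    {
        double p = red_drop_prob(avg_load, low_wm, high_wm, max_p);
        return ((double)rand() / RAND_MAX) < p;
    }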

So, for example, let's assume a backend allows 100 connections at most and
typically has 30 of them in use. The configuration might be something
like: if current connections > 110% of max (i.e. connections are being
queued), randomly drop connections with an "active transfer" older than
5 seconds and a transfer rate below 1 KB/s. Here, I think "active
transfer" would mean that the request headers have been received and the
POST (PUT, etc.) body is being sent very slowly, or that the response
headers have been sent and the client is reading the body very slowly. It
also seems like there would need to be a limit on how long a client has
to read the response headers, much like the existing limit on how long a
client has to send the request headers.
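
As a sketch, that selection test might look like this (the struct, the
names, and the thresholds just restate the example above; they are not a
real haproxy interface):

    #include <time.h>

    struct conn_stats {
        time_t start;        /* when the active transfer began */
        unsigned long bytes; /* bytes moved since then */
    };

    /* Candidate for dropping: backend over 110% of maxconn, transfer
     * active for more than 5 seconds, and rate below 1 KB/s. */
    int drop_candidate(const struct conn_stats *c, int cur_conns,
                       int maxconn, time_t now)
    {
        double elapsed = difftime(now, c->start);

        if (cur_conns * 10 <= maxconn * 11) /* not past the 110% mark */
            return 0;
        if (elapsed <= 5.0)                 /* too young to judge */
            return 0;
        return (c->bytes / elapsed) < 1024.0; /* under 1 KB/s */
    }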

RED isn't perfect and has a few known issues, as I'm sure you're aware.
Maybe there is another congestion-control scheme haproxy could use that is
better than both tail drop and RED.

-Bryan



On Tue, Oct 2, 2012 at 11:58 PM, Willy Tarreau <w...@1wt.eu> wrote:

> On Tue, Oct 02, 2012 at 11:42:27PM -0700, Bryan Talbot wrote:
> > Having one (or some small number) of slow readers isn't a problem.  The
> > problem comes up when some significant percentage of your requests are
> > from slow readers.  Those readers might be from the same client or
> > distributed, which seems pretty common in the attacks we see.
> >
> > Would it be possible for haproxy to start dropping connections to slow
> > readers if some percentage of them, relative to the capacity (of the
> > frontend, backend, etc.), were slow?  If the slowness is due to network
> > congestion, then those clients are already having connection issues
> > anyway.
>
> It's not necessarily true, unfortunately. And if you want to be kind to
> your visitors, you'd better kill the fast connections, since in one click
> they can get the same object again, rather than killing the ones that
> will need another hour of download (and of connection usage on your
> side, BTW).
>
> If you have some captures of such slow readers attacking you, I would be
> interested; I think they could be very useful. One significant issue to
> deal with is that the system's network buffers hide the slow ACKs from
> the application layer, so you can't easily detect that someone is
> reading one byte at a time, and distinguishing someone who does this
> from someone experiencing repeated retransmits (typically a smartphone)
> is very hard.
>
> What if your server is saturated serving smartphones? You don't
> necessarily want to kill many transfers; it will only make things
> worse because they'll be there again after a retry.
>
> Probably we should try to consider the transmission quality over the
> whole transfer, which might be similar to the transfer rate after all.
> The difference between the slow reader and other readers is that the
> slow reader reads *very* slowly in order to make its connection last
> as long as possible. The average send() size, as well as the average
> time between two send() calls, might be a good indication of what is
> happening.
>
> But even then, you see, the attack can change a bit. Imagine for a
> second that the attacker connects to your site, waits one second,
> sends "HEAD /favicon.ico", waits one second, and does it again over
> the same connection. It's just one tiny request per second to keep
> the connection alive. Should we block them or let them pass? Very
> hard to tell. This is a perfectly valid transfer pattern, but if
> abused, it becomes an attack. The difference is that the connection
> to the backend server will be very ephemeral with haproxy, whereas
> doing this against a normal server is as painful as reading one
> small segment at a time.
>
> Willy
>
>
