Re: Random values with inspect-delay possible?

2014-09-10 Thread Willy Tarreau
On Wed, Sep 10, 2014 at 04:09:54PM +0200, bjun...@gmail.com wrote:
> 2014-09-04 14:33 GMT+02:00 bjun...@gmail.com :
> > Hi,
> >
> >
> > I'm using the following in a backend to rate-limit spiders or badly
> > behaving clients:
> >
> >
> > backend be_spider
> >
> > tcp-request inspect-delay 2000ms
> > tcp-request content accept if WAIT_END
> >
> > server node01 192.168.1.10:80 maxconn {LOWVALUE}
> >
> >
> >
> > If an abuser/spider/crawler makes many requests at the same time (within
> > the same second), all requests are delayed for the full 2000 ms. But once
> > the delay is over, all requests still burst through at the same point in
> > time.
> >
> >
> > What I want to do is set the inspect-delay randomly for every request
> > (for example in a range from 1000 ms to 2000 ms) to distribute the
> > requests over a timeframe and absorb massive bursts.
> >
> >
> > The overall backend capacity is limited with a low maxconn value, but
> > I also have to control bursts of requests.
> >
> >
> > Is this possible, or is there a different method to accomplish this?
> >
> > ---
> > Bjoern
> 
> Hi,
> 
> If this is not possible, I would like to propose it as a feature (if
> this is a valid feature request).

No please, really don't implement such an ugly hack. As the name implies,
the inspect delay is a delay to inspect incoming requests. It can be abused
to force a client to wait, but let's not cripple the main behaviour just to
improve the side effect.
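
(For reference, a minimal sketch of that intended use, with purely
illustrative names, port and SNI value: the delay gives HAProxy time to
buffer enough of the connection to evaluate content rules, for example
routing TLS traffic by SNI.)

frontend fe_tls
    mode tcp
    bind :443
    # wait up to 5s for the TLS ClientHello to arrive in the buffer
    tcp-request inspect-delay 5s
    # stop waiting as soon as a ClientHello has been seen
    tcp-request content accept if { req_ssl_hello_type 1 }
    # route on the SNI found in the buffered hello
    # (be_example / be_other are assumed to be defined elsewhere)
    use_backend be_example if { req_ssl_sni -i example.com }
    default_backend be_other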

Also, you're forgetting the most likely case for such a usage: the client dies
or is killed by hand, which will still result in a burst of connection closures.

BTW, what problem do you have with closing many connections at once, exactly?

Also, it's not necessarily a good idea to slow down crawlers, because some
of them take the response time into account when ranking your site... Maybe
you'd rather reject them or return a 503 so that they retry later.
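
(One possible way to do that, as a rough sketch: send identified crawlers to
a backend with no server, so HAProxy answers 503 immediately instead of
delaying them. The is_crawler ACL, the User-Agent substrings and the backend
names below are illustrative assumptions, not something discussed in this
thread.)

frontend fe_main
    mode http
    bind :80
    # crude crawler detection by User-Agent substring; adjust to taste
    acl is_crawler hdr_sub(User-Agent) -i bot crawler spider
    use_backend be_reject if is_crawler
    default_backend be_app

backend be_reject
    mode http
    # no server defined: every request gets HAProxy's built-in 503 response,
    # which well-behaved crawlers treat as "come back later"

backend be_app
    mode http
    # low maxconn as in the original config ({LOWVALUE} in the thread)
    server node01 192.168.1.10:80 maxconn 10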

Willy




Re: Random values with inspect-delay possible?

2014-09-10 Thread bjun...@gmail.com
2014-09-04 14:33 GMT+02:00 bjun...@gmail.com :
> Hi,
>
>
> I'm using the following in a backend to rate-limit spiders or badly
> behaving clients:
>
>
> backend be_spider
>
> tcp-request inspect-delay 2000ms
> tcp-request content accept if WAIT_END
>
> server node01 192.168.1.10:80 maxconn {LOWVALUE}
>
>
>
> If an abuser/spider/crawler makes many requests at the same time (within
> the same second), all requests are delayed for the full 2000 ms. But once
> the delay is over, all requests still burst through at the same point in
> time.
>
>
> What I want to do is set the inspect-delay randomly for every request
> (for example in a range from 1000 ms to 2000 ms) to distribute the
> requests over a timeframe and absorb massive bursts.
>
>
> The overall backend capacity is limited with a low maxconn value, but
> I also have to control bursts of requests.
>
>
> Is this possible, or is there a different method to accomplish this?
>
> ---
> Bjoern

Hi,

If this is not possible, I would like to propose it as a feature (if
this is a valid feature request).


---
Bjoern



Random values with inspect-delay possible?

2014-09-04 Thread bjun...@gmail.com
Hi,


I'm using the following in a backend to rate-limit spiders or badly
behaving clients:


backend be_spider

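# hold each request in the buffer for up to 2000 ms; the WAIT_END ACL only
# becomes true once that inspection delay has expired, so every request is
# released after the same fixed wait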
tcp-request inspect-delay 2000ms
tcp-request content accept if WAIT_END

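# the low per-server maxconn caps concurrency; excess requests queue in the
# backend until a slot frees up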
server node01 192.168.1.10:80 maxconn {LOWVALUE}



If an abuser/spider/crawler makes many requests at the same time (within
the same second), all requests are delayed for the full 2000 ms. But once
the delay is over, all requests still burst through at the same point in
time.


What I want to do is set the inspect-delay randomly for every request
(for example in a range from 1000 ms to 2000 ms) to distribute the
requests over a timeframe and absorb massive bursts.


The overall backend capacity is limited with a low maxconn value, but
I also have to control bursts of requests.


Is this possible, or is there a different method to accomplish this?

---
Bjoern