Re: Mitigating the Slowloris DoS attack

2009-06-29 Thread Jim Jagielski


On Jun 24, 2009, at 5:18 AM, Joe Orton wrote:


Regardless, the only thing I've ever wanted to see changed in the server
which would somewhat mitigate this type of attack is to have coarser
granularity on timeouts, e.g. per-request-read, rather than simply
per-IO-operation.


++1. Timeout would set universal defaults and we could then
have something like Timeout ReqRead 2 to provide further refinement.
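
Something like the following, purely as a sketch of the proposed syntax
(a per-class "Timeout ReqRead" is not an existing directive, just an
illustration of the idea):

# Hypothetical sketch only - per-class timeouts do not exist today.
Timeout 300          # universal default for all timeout classes
Timeout ReqRead 2    # tighter limit on reading a complete request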



Re: A modest proposal, was Re: Mitigating the Slowloris DoS attack

2009-06-29 Thread Jim Jagielski


On Jun 23, 2009, at 8:39 PM, Akins, Brian wrote:


On 6/23/09 12:48 AM, Paul Querna p...@querna.org wrote:


Mitigation is the wrong approach.

We all know our architecture is wrong.


Another heretical suggestion:

Lighttpd and nginx are both released under BSD-like licenses.

Hear me out.

I've actually been thinking: how possible would it be to transform one of
them into httpd 3.0?


Most probably not that hard, since Lighttpd is a fork of Apache 1.3.



Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Dirk-Willem van Gulik

Akins, Brian wrote:

On 6/22/09 10:40 PM, Weibin Yao nbubi...@gmail.com wrote:


I have an idea to mitigate the problem: put Nginx as a reverse proxy
server in front of Apache.


Or a device that effectively acts as such.

So what we did in the mid '90s when we were hit by pretty much the same 
was a bit simpler - any client which did not complete its headers within 
a few seconds (or whatever a SLIP connection over a few k baud or so 
would need) was simply handed off by passing the file descriptor over a 
socket to a special single Apache process. This one did a simple 
single-threaded async select() loop for all the laggards and would only 
pass it back to the main Apache children once header reading was 
complete. This was later replaced by kernel accept filters.
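
For the curious, the hand-off itself was nothing more exotic than the
standard SCM_RIGHTS trick; a generic POSIX sketch (not the actual code from
back then) looks roughly like this:

/* Minimal sketch: send an already-accepted client fd to another process
 * over an AF_UNIX socket using SCM_RIGHTS.  Generic POSIX illustration,
 * not the historical patch described above. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static int send_fd(int unix_sock, int fd_to_pass)
{
    struct msghdr msg;
    struct iovec iov;
    char dummy = 'F';                        /* must send at least one byte */
    char ctrl[CMSG_SPACE(sizeof(int))];
    struct cmsghdr *cmsg;

    memset(&msg, 0, sizeof(msg));
    memset(ctrl, 0, sizeof(ctrl));
    iov.iov_base = &dummy;
    iov.iov_len  = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof(ctrl);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return (sendmsg(unix_sock, &msg, 0) < 0) ? -1 : 0;
}

The receiving process does the mirror image with recvmsg() and then owns a
duplicate of the descriptor, so the sender can simply close() its copy.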


Thanks,

Dw.


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Graham Leggett
Dirk-Willem van Gulik wrote:

 So what we did in the mid '90s when we were hit by pretty much the same
 was a bit simpler - any client which did not complete its headers within
 a few seconds (or whatever a SLIP connection over a few k baud or so
 would need) was simply handed off by passing the file descriptor over a
 socket to a special single Apache process. This one did a simple
 single-threaded async select() loop for all the laggards and would only
 pass it back to the main Apache children once header reading was
 complete. This was later replaced by kernel accept filters.

Are kernel accept filters widespread enough for it to be reasonably
considered a generic solution to the problem? If so, then the solution
to this problem is to just configure them correctly, and you're done.
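
For what it's worth, on FreeBSD the configuration amounts to loading the
accf_http kernel module and pointing the AcceptFilter directive (httpd 2.2+)
at it; roughly the following, though the available filter names and defaults
vary by platform:

# FreeBSD, as root: make the HTTP accept filter available to the kernel.
kldload accf_http

# httpd.conf: don't wake httpd until the kernel has buffered a full request
# (plain HTTP) or at least some data (TLS, where the handshake comes first).
AcceptFilter http  httpready
AcceptFilter https dataready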

Regards,
Graham
--




Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Plüm, Rüdiger, VF-Group
 

 -----Original Message-----
 From: Graham Leggett 
 Sent: Wednesday, 24 June 2009 10:05
 To: dev@httpd.apache.org
 Subject: Re: Mitigating the Slowloris DoS attack
 
 Dirk-Willem van Gulik wrote:
 
  So what we did in the mid '90s when we were hit by pretty much the same
  was a bit simpler - any client which did not complete its headers within
  a few seconds (or whatever a SLIP connection over a few k baud or so
  would need) was simply handed off by passing the file descriptor over a
  socket to a special single Apache process. This one did a simple
  single-threaded async select() loop for all the laggards and would only
  pass it back to the main Apache children once header reading was
  complete. This was later replaced by kernel accept filters.
 
 Are kernel accept filters widespread enough for it to be reasonably
 considered a generic solution to the problem? If so, then the solution
 to this problem is to just configure them correctly, and you're done.

The following issues remain:

1. You only have them on the BSD platforms.
2. They don't help with SSL.
3. These kinds of attacks can also be done in phases after the headers are
   read.

A curious question, as I am not that familiar with the accept filters:

Do they really wait to hand over the socket until they have read all the headers?
I thought they only read the first line of the request before handing over
the socket to the app.

Regards

Rüdiger


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Joe Orton
On Mon, Jun 22, 2009 at 09:48:46PM -0700, Paul Querna wrote:
 On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair a...@synflood.at wrote:
  Hello everyone,
 .
  The basic principle is that the timeout for new connections is adjusted
  according to the current load on the Apache instance: a load percentage is
  computed in the perform_idle_server_maintenance() routine and made available
  through the global scoreboard. Whenever the timeout is set, the current load
  percentage is taken into account. The result is that slowly sending
  connections are dropped due to a timeout, while legitimate, fast-sending
  connections are still being served. While this approach doesn't completely
  fix the issue, it mitigates the negative impact of the Slowloris attack.
 
 Mitigation is the wrong approach.
 
 We all know our architecture is wrong.

Meh.  There will always be a maximum to the number of concurrent 
connections a server can handle - be that hardware, kernel, or server 
design.  If you allow a single client to establish that number of 
connections it will deny service to other clients.

That is all that slowloris does, and you will always have to mitigate 
that kind of attack at network/router/firewall level.  It can be done 
today on Linux with a single trivial iptables rule, I'm sure the same is 
true of other kernels.
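
For the record, the kind of rule I mean is something along these lines
(illustrative only: it needs the connlimit match, and the threshold has to be
tuned for your own traffic, especially for clients behind shared NAT):

# Refuse new connections to port 80 from any single source address that
# already has more than 30 open (threshold picked purely for illustration).
iptables -I INPUT -p tcp --dport 80 --syn \
         -m connlimit --connlimit-above 30 -j DROP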

The only aspect of slowloris which claims to be novel is that it has 
low bandwidth footprint and no logging/detection footprint.  To the 
former, I'm not sure that the bandwidth footprint is significantly 
different from sending legitimate single-packet HTTP requests with 
single-packet responses; to the latter, it will have a very obvious 
footprint if you are monitoring the number of responses/minute your 
server is processing.

Regardless, the only thing I've ever wanted to see changed in the server 
which would somewhat mitigate this type of attack is to have coarser 
granularity on timeouts, e.g. per-request-read, rather than simply 
per-IO-operation.  (one of the few things 1.3 did better than 2.x, 
though the *way* it did it was horrible)

Regards, Joe


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Matthieu Estrade
The problem can also happen if a Content-Length is sent and not enough data is
posted, so I don't think checking for complete headers will solve the
entire problem. I'm currently playing with a dynamic timeout that considers
the time between the request line and the first header to adapt the future
timeout of the socket, but an attack between the request line and the first
incomplete header remains possible. The second possible countermeasure is to
increment a per-IP counter for waiting connections (in a connection filter:
increment before ap_get_brigade, decrement after getting it). If there are too
many connections in the waiting state from the same IP, the IP is blacklisted.
The aim is to differentiate waiting connections from working connections. A
separate thread could also check a socket list to see when the latest data
arrived and kill the socket if there are too many waiting connections from the
same IP... But all of this will add locking and performance issues :(
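
A rough sketch of that counter as a connection-level input filter follows;
the waiting_inc()/waiting_dec()/waiting_count() helpers are hypothetical and
would have to wrap a mutex-protected, shared per-IP table, which is exactly
where the lock and performance cost I mention comes from:

/* Sketch only: count, per client IP, the connections currently blocked
 * waiting for input, and refuse clients that have too many stalled ones. */
#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"
#include "util_filter.h"

#define MAX_WAITING_PER_IP 10

/* Hypothetical helpers (not shown): a mutex-protected shared per-IP table. */
extern void waiting_inc(const char *ip);
extern void waiting_dec(const char *ip);
extern int  waiting_count(const char *ip);

static apr_status_t slow_guard_in_filter(ap_filter_t *f,
                                         apr_bucket_brigade *bb,
                                         ap_input_mode_t mode,
                                         apr_read_type_e block,
                                         apr_off_t readbytes)
{
    conn_rec *c = f->c;
    apr_status_t rv;

    if (waiting_count(c->remote_ip) > MAX_WAITING_PER_IP) {
        return APR_ECONNABORTED;       /* too many stalled connections: drop */
    }

    waiting_inc(c->remote_ip);         /* entering the "waiting for data" state */
    rv = ap_get_brigade(f->next, bb, mode, block, readbytes);
    waiting_dec(c->remote_ip);         /* data arrived (or the read failed) */

    return rv;
}

static int slow_guard_pre_conn(conn_rec *c, void *csd)
{
    ap_add_input_filter("SLOW_GUARD_IN", NULL, NULL, c);
    return OK;
}

static void slow_guard_register_hooks(apr_pool_t *p)
{
    ap_register_input_filter("SLOW_GUARD_IN", slow_guard_in_filter,
                             NULL, AP_FTYPE_CONNECTION);
    ap_hook_pre_connection(slow_guard_pre_conn, NULL, NULL, APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA slow_guard_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    slow_guard_register_hooks
};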

Matthieu

Graham Leggett wrote:
 Dirk-Willem van Gulik wrote:
 
 So what we did in the mid '90s when we were hit by pretty much the same
 was a bit simpler - any client which did not complete its headers within
 a few seconds (or whatever a SLIP connection over a few k baud or so
 would need) was simply handed off by passing the file descriptor over a
 socket to a special single Apache process. This one did a simple
 single-threaded async select() loop for all the laggards and would only
 pass it back to the main Apache children once header reading was
 complete. This was later replaced by kernel accept filters.
 
 Are kernel accept filters widespread enough for it to be reasonably
 considered a generic solution to the problem? If so, then the solution
 to this problem is to just configure them correctly, and you're done.
 
 Regards,
 Graham
 --



Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Matthieu Estrade
I totally agree with you.

The first point is the lack of tuning in httpd.conf: this kind of attack
crashes a default httpd.conf setup, but a well-configured server is harder to
kill, especially if you have decreased the timeout. With a 5-second timeout
and good tuning, slowloris fails...

More granular timeouts and maybe adaptive timeouts are also IMHO a good
way to improve resistance to this kind of attack. 300 seconds is too
much, and maybe this value could be changed in the default httpd
configuration. A POST request with a body has far more reason to be slow
because of the amount of data and the time it takes to transfer; a simple GET
request contains only headers and should be sent in one go, so there is no
need to wait a long time here...
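
Concretely, the kind of tuning I mean is nothing more than this (example
values only, to be adapted to each site's traffic):

# Example values only - adapt to the site.
Timeout 5
KeepAliveTimeout 3
MaxClients 256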

Matthieu

Joe Orton wrote:
 On Mon, Jun 22, 2009 at 09:48:46PM -0700, Paul Querna wrote:
 On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair a...@synflood.at wrote:
 Hello everyone,
 .
 The basic principle is that the timeout for new connections is adjusted
 according to the current load on the Apache instance: a load percentage is
 computed in the perform_idle_server_maintenance() routine and made available
 through the global scoreboard. Whenever the timeout is set, the current load
 percentage is taken into account. The result is that slowly sending
 connections are dropped due to a timeout, while legitimate, fast-sending
 connections are still being served. While this approach doesn't completely
 fix the issue, it mitigates the negative impact of the Slowloris attack.
 Mitigation is the wrong approach.

 We all know our architecture is wrong.
 
 Meh.  There will always be a maximum to the number of concurrent 
 connections a server can handle - be that hardware, kernel, or server 
 design.  If you allow a single client to establish that number of 
 connections it will deny service to other clients.
 
 That is all that slowloris does, and you will always have to mitigate 
 that kind of attack at network/router/firewall level.  It can be done 
 today on Linux with a single trivial iptables rule, I'm sure the same is 
 true of other kernels.
 
 The only aspect of slowloris which claims to be novel is that it has 
 low bandwidth footprint and no logging/detection footprint.  To the 
 former, I'm not sure that the bandwidth footprint is significantly 
 different from sending legitimate single-packet HTTP requests with 
 single-packet responses; to the latter, it will have a very obvious 
 footprint if you are monitoring the number of responses/minute your 
 server is processing.
 
 Regardless, the only thing I've ever wanted to see changed in the server 
 which would somewhat mitigate this type of attack is to have coarser 
 granularity on timeouts, e.g. per-request-read, rather than simply 
 per-IO-operation.  (one of the few things 1.3 did better than 2.x, 
 though the *way* it did it was horrible)
 
 Regards, Joe
 



Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Andreas Krennmair

* Joe Orton jor...@redhat.com [2009-06-24 11:20]:
Meh.  There will always be a maximum to the number of concurrent 
connections a server can handle - be that hardware, kernel, or server 
design.  If you allow a single client to establish that number of 
connections it will deny service to other clients.


That is all that slowloris does, and you will always have to mitigate 
that kind of attack at network/router/firewall level.  It can be done 
today on Linux with a single trivial iptables rule, I'm sure the same is 
true of other kernels.


I think you confuse the PoC tool with the fundamental problem. You can't fend 
off this kind of attack at the TCP level, at least not in cases where the n 
connections that block Apache are made not by 1 host but by n hosts.


Regards,
Andreas


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Kevin J Walters

 M == Matthieu Estrade mestr...@apache.org writes:

M More granular timeouts and maybe adaptive timeouts are also IMHO a good
M way to improve resistance to this kind of attack.

The current 1.3, 2.0 and 2.2 documentation is in agreement too!

I believe the SSL module also takes its timeout value from this
setting. It would be great if that were separately configurable too, to
cater for those intent on doing partial SSL handshakes.


  The TimeOut directive currently defines the amount of time Apache will wait 
for three things:

   1. The total amount of time it takes to receive a GET request.
   2. The amount of time between receipt of TCP packets on a POST or PUT 
request.
   3. The amount of time between ACKs on transmissions of TCP packets in 
responses.

  We plan on making these separately configurable at some point down the
  road. The timer used to default to 1200 before 1.2, but has been
  lowered to 300 which is still far more than necessary in most
  situations. It is not set any lower by default because there may still
  be odd places in the code where the timer is not reset when a packet
  is sent. 


regards

|evin

-- 
Kevin J Walters  Morgan Stanley
k...@ms.com   25 Cabot Square
Tel: 020 7425 7886   Canary Wharf
Fax: 020 7677 8504   London E14 4QA


Re: Mitigating the Slowloris DoS attack

2009-06-24 Thread Graham Dumpleton
2009/6/24 Kevin J Walters kevin.walt...@morganstanley.com:

 M == Matthieu Estrade mestr...@apache.org writes:

 M More granular timeouts and maybe adaptive timeouts are also IMHO a good
 M way to improve resistance to this kind of attack.

 The current 1.3, 2.0 and 2.2 documentation is in agreement too!

 I believe the SSL module also takes its timeout value from this
 setting. It would be great if that were separately configurable too, to
 cater for those intent on doing partial SSL handshakes.


  The TimeOut directive currently defines the amount of time Apache will wait 
 for three things:

   1. The total amount of time it takes to receive a GET request.
   2. The amount of time between receipt of TCP packets on a POST or PUT 
 request.
   3. The amount of time between ACKs on transmissions of TCP packets in 
 responses.

  We plan on making these separately configurable at some point down the
  road. The timer used to default to 1200 before 1.2, but has been
  lowered to 300 which is still far more than necessary in most
  situations. It is not set any lower by default because there may still
  be odd places in the code where the timer is not reset when a packet
  is sent.

From what I understand, the server timeout value is also used to break
deadlocks in mod_cgi when the POST data is greater than the UNIX
socket buffer size, the CGI script does not read the POST data, and the
script then returns a response greater than the UNIX socket buffer size. In
other words, the CGI script blocks because the Apache server child process
isn't reading the response, while the Apache server child process is blocked
waiting for the CGI script to consume the POST data. The timeout
value breaks the deadlock. In this context, making the timeout too
small may have unintended consequences and affect how CGI
scripts work, so a separate timeout for mod_cgi would be preferable.

FWIW, the mod_cgid module doesn't appear to have this deadlock
detection, so in practice this issue could in itself be used as a
denial-of-service vector when mod_cgid is used, as it will completely
lock up the Apache child server thread with no failsafe to unblock it.
I have brought this issue up before on the list to get someone else to
analyse the mod_cgid code and see whether what I see is correct or not, but no
one seemed interested at the time, so I took it that people didn't see
it as important. It may not have been seen as such a big issue because on
Linux systems the UNIX socket buffer size is on the order of 220KB. On
Mac OS X, though, the UNIX socket buffer size is only 8KB, so it is much
easier to trigger. Unlike SendBufferSize and ReceiveBufferSize, there
are no directives to override these buffer sizes for mod_cgi and
mod_cgid.

Graham


Re: Mitigating the Slowloris DoS attack

2009-06-23 Thread Akins, Brian
On 6/22/09 10:40 PM, Weibin Yao nbubi...@gmail.com wrote:
 
 I have an idea to mitigate the problem: put Nginx as a reverse proxy
 server in front of Apache.

Or a device that effectively acts as such.

-- 
Brian Akins
Chief Operations Engineer
Turner Digital Media Technologies



A modest proposal, was Re: Mitigating the Slowloris DoS attack

2009-06-23 Thread Akins, Brian
On 6/23/09 12:48 AM, Paul Querna p...@querna.org wrote:

 Mitigation is the wrong approach.
 
 We all know our architecture is wrong.

Another heretical suggestion:

Lighttpd and nginx are both released under BSD-like licenses.

Hear me out.

I've actually been thinking: how possible would it be to transform one of
them into httpd 3.0? Nginx has a few architectural issues (a different
cache for FastCGI versus proxy??) and lighttpd is still fairly immature
(its cache can't handle Vary, and lots of stuff is broken when running
multiple processes).  However, just think if the forces of us and them
combined (well, one of them).  My personal pick is lighttpd - the community
would fit better (nginx is almost all in Russian) and it already has a lot
of Lua :)

I know this would probably only even be considered in a bizarro parallel
universe.  However, what are our alternatives?

-- 
Brian Akins
Chief Operations Engineer
Turner Digital Media Technologies



Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Andreas Krennmair

* Guenter Knauf fua...@apache.org [2009-06-22 04:30]:

wouldn't limiting the number of simultaneous connections from one IP
already help? E.g. something like:
http://gpl.net.ua/modipcount/downloads.html


Not only would this be futile against the Slowloris attack (imagine n 
connections from n hosts instead of n connections from 1 host), it would also 
potentially lock out groups of people behind the same NAT gateway.


Regards,
Andreas


Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Dirk-Willem van Gulik

Guenter Knauf wrote:

Hi Andreas,
Andreas Krennmair wrote:

For those who are still unaware of the Slowloris attack, it's a
denial-of-service attack that consumes Apache's resources by opening up
a great number of parallel connections and slowly sending partial
[...]
attack including a PoC tool was published here:
http://ha.ckers.org/slowloris/

I thought for some time about the whole issue, and then I developed a
proof-of-concept patch for Apache 2.2.11 (currently only touches the
prefork MPM), which you can download here:
http://synflood.at/tmp/anti-slowloris.diff

wouldn't limiting the number of simultaneous connections from one IP
already help? E.g. something like:
http://gpl.net.ua/modipcount/downloads.html


Keep in mind that, if this attack turns into a real issue, it is likely 
to be through a vector like botnets. It is pretty common* to see lots of 
bots behind a single (corporate) NAT gateway.

You would not necessarily want to penalize an entire intranet for its 
lack of security that way. That is not our job :).


Also - these things are only a problem when the server is resource-tight 
- and even then - it could be modified to just invest little at that 
point -- either by having a different accept mechanism -or- by detecting 
sluggishness and then handing the connection back to something more 
async/single-threaded which deals with all slow connections - freeing up 
the 'full' worker for real work.


Dw

*: e.g. see the Conficker stats.


Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread William A. Rowe, Jr.
Andreas Krennmair wrote:
 * Guenter Knauf fua...@apache.org [2009-06-22 04:30]:
 wouldn't limiting the number of simultaneous connections from one IP
 already help? E.g. something like:
 http://gpl.net.ua/modipcount/downloads.html
 
 Not only would this be futile against the Slowloris attack (imagine n
 connections from n hosts instead of n connections from 1 host), it would
 also potentially lock out groups of people behind the same NAT gateway.

FWIW mod_remoteip can be used to partially mitigate the weakness of this
class of solutions.

However, it only works for known, trusted proxies, and can only be safely
used for those with public IPs.  Where the same 10.0.0.5 on your private
NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
found a good solution, but mod_remoteip still needs one, eventually.


Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Matthieu Estrade
Hi,

How about coding a module that looks at how many bytes are read and, if the
chunk of data is too small, closes the connection?
Something like a MinDataReadSize. If the read() function reads too little
data, close() the socket... I don't know if it's possible to hook directly
into the connection hook to do this...

Matthieu

William A. Rowe, Jr. wrote:
 Andreas Krennmair wrote:
 * Guenter Knauf fua...@apache.org [2009-06-22 04:30]:
 wouldn't limiting the number of simultaneous connections from one IP
 already help? E.g. something like:
 http://gpl.net.ua/modipcount/downloads.html
 Not only would this be futile against the Slowloris attack (imagine n
 connections from n hosts instead of n connections from 1 host), it would
 also potentially lock out groups of people behind the same NAT gateway.
 
 FWIW mod_remoteip can be used to partially mitigate the weakness of this
 class of solutions.
 
 However, it only works for known, trusted proxies, and can only be safely
 used for those with public IPs.  Where the same 10.0.0.5 on your private
 NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
 issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
 found a good solution, but mod_remoteip still needs one, eventually.
 



Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Weibin Yao

William A. Rowe, Jr. at 2009-6-23 2:00 wrote:

Andreas Krennmair wrote:

* Guenter Knauf fua...@apache.org [2009-06-22 04:30]:

wouldn't limiting the number of simultaneous connections from one IP
already help? E.g. something like:
http://gpl.net.ua/modipcount/downloads.html

Not only would this be futile against the Slowloris attack (imagine n
connections from n hosts instead of n connections from 1 host), it would
also potentially lock out groups of people behind the same NAT gateway.

FWIW mod_remoteip can be used to partially mitigate the weakness of this
class of solutions.

However, it only works for known, trusted proxies, and can only be safely
used for those with public IPs.  Where the same 10.0.0.5 on your private
NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
found a good solution, but mod_remoteip still needs one, eventually.

I have an idea to mitigate the problem: put Nginx as a reverse proxy 
server in front of Apache.


--
Weibin Yao



Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Graham Dumpleton
2009/6/23 Weibin Yao nbubi...@gmail.com:
 William A. Rowe, Jr. at 2009-6-23 2:00 wrote:

 Andreas Krennmair wrote:


 * Guenter Knauf fua...@apache.org [2009-06-22 04:30]:


 wouldn't limiting the number of simultaneous connections from one IP
 already help? E.g. something like:
 http://gpl.net.ua/modipcount/downloads.html


 Not only would this be futile against the Slowloris attack (imagine n
 connections from n hosts instead of n connections from 1 host), it would
 also potentially lock out groups of people behind the same NAT gateway.


 FWIW mod_remoteip can be used to partially mitigate the weakness of this
 class of solutions.

 However, it only works for known, trusted proxies, and can only be safely
 used for those with public IPs.  Where the same 10.0.0.5 on your private
 NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
 issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
 found a good solution, but mod_remoteip still needs one, eventually.



 I have an idea to mitigate the problem: put Nginx as a reverse proxy
 server in front of Apache.

Although your comment is perhaps heresy here, it does highlight one of
the things that nginx is good at, even if you don't use it to serve
static files with Apache handling just the dynamic web application.
That is, it can isolate Apache from slow clients, whether that be
an attack as in this case, or just normal users on slow networks.
The nginx proxy module also helps by buffering request content to disk
before actually sending the request on to the backend, so it avoids
tying up Apache's limited request-handler threads until the request
content is completely available, although nginx does have an upper
limit on this at some point and will still stream when the POST
content is large enough.

The nginx server works better at avoiding problems with slow clients
because it is event-driven rather than threaded and so can handle more
connections without needing to tie up expensive threads.
Unfortunately, trying to make socket accept handling in Apache
event-driven, and have requests only be handed off to a thread for
processing when ready, can introduce its own problems. This is because
an event-driven system can tend to greedily accept new socket
connections. In a multiprocess server configuration this can mean that
a single process may accept more than its fair share of socket
connections and, by the time it has read the initial request headers,
may not have enough available threads to handle the requests. In the
meantime, another server process, which did not get in quickly enough
to accept some of the connections, could be sitting there idle. How you
mediate between multiple servers to avoid this sort of problem would
be tricky, if it can be done at all.

Anyway, now for a hare-brained suggestion that could bring some of
this nginx goodness to Apache, although no doubt it would have various
limitations which, to solve properly and integrate seamlessly into
Apache, would require some changes in the core.

The idea here is to have an Apache module which spawns off its own
child process which implements a very small, lightweight, event-driven
proxy that listens on the real listener sockets you want to expose.
This process's sole job would then be to handle reading in the request
headers, and perhaps optionally buffering up request content, and then
squirt it across to real Apache child server processes to be handled
when it has all the information it needs. To that end, it wouldn't be
a general-purpose proxy but quite customised. As such, it could even
perhaps be made more efficient than nginx in the way it is used to
protect Apache from such things as slow clients.
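
To make that a bit more concrete, the heart of such a front end would be
little more than the sketch below: a single select() loop that accumulates
bytes per client until the request headers are complete and then hands the
descriptor off. Error handling, the accept() path and the actual hand-off
(e.g. the SCM_RIGHTS descriptor passing mentioned earlier in the thread) are
stubbed out or omitted:

/* Sketch only: poll stalled clients, buffer their bytes until the header
 * terminator "\r\n\r\n" shows up, then hand the fd to a real Apache child. */
#include <sys/select.h>
#include <unistd.h>

#define HDR_MAX 8192

struct pending {
    int    fd;                  /* -1 when the slot is free */
    char   buf[HDR_MAX];
    size_t len;
};

static int headers_complete(const struct pending *p)
{
    size_t i;
    for (i = 3; i < p->len; i++) {
        if (p->buf[i-3] == '\r' && p->buf[i-2] == '\n' &&
            p->buf[i-1] == '\r' && p->buf[i]   == '\n')
            return 1;
    }
    return 0;
}

static void poll_pending(struct pending *clients, int nclients,
                         void (*hand_off)(int fd))   /* hand_off is a stub */
{
    fd_set rfds;
    int i, maxfd = -1;

    FD_ZERO(&rfds);
    for (i = 0; i < nclients; i++) {
        if (clients[i].fd >= 0) {
            FD_SET(clients[i].fd, &rfds);
            if (clients[i].fd > maxfd)
                maxfd = clients[i].fd;
        }
    }
    if (maxfd < 0 || select(maxfd + 1, &rfds, NULL, NULL, NULL) <= 0)
        return;

    for (i = 0; i < nclients; i++) {
        struct pending *p = &clients[i];
        ssize_t n;

        if (p->fd < 0 || !FD_ISSET(p->fd, &rfds))
            continue;
        /* Note: a full buffer makes this a zero-length read, which returns 0
         * and so drops the client - i.e. oversized headers get discarded. */
        n = read(p->fd, p->buf + p->len, sizeof(p->buf) - p->len);
        if (n <= 0) {                    /* closed, error, or buffer full */
            close(p->fd);
            p->fd = -1;
        }
        else {
            p->len += (size_t)n;
            if (headers_complete(p)) {   /* ready: give it to an Apache child */
                hand_off(p->fd);
                p->fd = -1;
            }
        }
    }
}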

For HTTP at least, this probably wouldn't be too hard to do and
wouldn't likely need any changes to the core. You could even make its
use optional, to the extent of it only applying to certain virtual
hosts. Where it all gets a lot harder, though, is virtual hosts which
use HTTPS.

So, that is my crazy thought for the day, and I am sure that it will be
derided for what it is worth.

I still find the thought interesting though and it falls into that
class of things I find interesting due to the challenge it presents.
:-)

Graham


Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Paul Querna
On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair a...@synflood.at wrote:
 Hello everyone,
.
 The basic principle is that the timeout for new connections is adjusted
 according to the current load on the Apache instance: a load percentage is
 computed in the perform_idle_server_maintenance() routine and made available
 through the global scoreboard. Whenever the timeout is set, the current load
 percentage is taken into account. The result is that slowly sending
 connections are dropped due to a timeout, while legitimate, fast-sending
 connections are still being served. While this approach doesn't completely
 fix the issue, it mitigates the negative impact of the Slowloris attack.

Mitigation is the wrong approach.

We all know our architecture is wrong.

We have started on fixing it, but we need to finish the async input
rewrite on trunk, and all of the people who have hacked on it, myself
included, have hit ENOTIME for the last several years.

Hopefully the publicity this has generated will get renewed interest
in solving this problem the right way, once and for all :)

It doesn't need to be the simple MPM, or the event MPM; it's not even
about MPMs, it's about how the whole input filter stack works.

So... I write yet another email about it... and disappear into the ether
of ENOTIME once again.

-Paul


Re: Mitigating the Slowloris DoS attack

2009-06-22 Thread Paul Querna
On Mon, Jun 22, 2009 at 9:07 PM, Graham Dumpleton graham.dumple...@gmail.com wrote:
 2009/6/23 Weibin Yao nbubi...@gmail.com:
 William A. Rowe, Jr. at 2009-6-23 2:00 wrote:

 Andreas Krennmair wrote:


 * Guenter Knauf fua...@apache.org [2009-06-22 04:30]:


 wouldn't limiting the number of simultaneous connections from one IP
 already help? E.g. something like:
 http://gpl.net.ua/modipcount/downloads.html


 Not only would this be futile against the Slowloris attack (imagine n
 connections from n hosts instead of n connections from 1 host), it would
 also potentially lock out groups of people behind the same NAT gateway.


 FWIW mod_remoteip can be used to partially mitigate the weakness of this
 class of solutions.

 However, it only works for known, trusted proxies, and can only be safely
 used for those with public IPs.  Where the same 10.0.0.5 on your private
 NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
 issues like Allow from 10.0.0.0/8 become painfully obvious.  I haven't
 found a good solution, but mod_remoteip still needs one, eventually.



 I have an idea to mitigate the problem: put Nginx as a reverse proxy
 server in front of Apache.

 Although your comment is perhaps heresy here, it does highlight one of
 the things that nginx is good at, even if you don't use it to serve
 static files with Apache handling just the dynamic web application.
 That is, it can isolate Apache from slow clients, whether that be
 an attack as in this case, or just normal users on slow networks.
 The nginx proxy module also helps by buffering request content to disk
 before actually sending the request on to the backend, so it avoids
 tying up Apache's limited request-handler threads until the request
 content is completely available, although nginx does have an upper
 limit on this at some point and will still stream when the POST
 content is large enough.

 The nginx server works better at avoiding problems with slow clients
 because it is event-driven rather than threaded and so can handle more
 connections without needing to tie up expensive threads.
 Unfortunately, trying to make socket accept handling in Apache
 event-driven, and have requests only be handed off to a thread for
 processing when ready, can introduce its own problems. This is because
 an event-driven system can tend to greedily accept new socket
 connections. In a multiprocess server configuration this can mean that
 a single process may accept more than its fair share of socket
 connections and, by the time it has read the initial request headers,
 may not have enough available threads to handle the requests. In the
 meantime, another server process, which did not get in quickly enough
 to accept some of the connections, could be sitting there idle. How you
 mediate between multiple servers to avoid this sort of problem would
 be tricky, if it can be done at all.

 Anyway, now for a hare-brained suggestion that could bring some of
 this nginx goodness to Apache, although no doubt it would have various
 limitations which, to solve properly and integrate seamlessly into
 Apache, would require some changes in the core.

 The idea here is to have an Apache module which spawns off its own
 child process which implements a very small, lightweight, event-driven
 proxy that listens on the real listener sockets you want to expose.
 This process's sole job would then be to handle reading in the request
 headers, and perhaps optionally buffering up request content, and then
 squirt it across to real Apache child server processes to be handled
 when it has all the information it needs. To that end, it wouldn't be
 a general-purpose proxy but quite customised. As such, it could even
 perhaps be made more efficient than nginx in the way it is used to
 protect Apache from such things as slow clients.

 For HTTP at least, this probably wouldn't be too hard to do and
 wouldn't likely need any changes to the core. You could even make its
 use optional, to the extent of it only applying to certain virtual
 hosts. Where it all gets a lot harder, though, is virtual hosts which
 use HTTPS.

 So, that is my crazy thought for the day, and I am sure that it will be
 derided for what it is worth.

Yes, I think the idea is a little crazy; we just need to fix the input
filters and encourage the use of the event MPM, along with FastCGI as a
connector, and then most of these problems go away :(


Mitigating the Slowloris DoS attack

2009-06-21 Thread Andreas Krennmair

Hello everyone,

Previously, I had contacted the Apache Security Team about a possible 
mitigation of the Slowloris DoS attack. I was referred to this mailing list to 
discuss non-private security issues.


For those who are still unaware of the Slowloris attack, it's a 
denial-of-service attack that consumes Apache's resources by opening up a 
great number of parallel connections and slowly sending partial requests, 
never completing them. Since Apache limits the number of parallel clients it 
serves (the MaxClients setting), this blocks further requests from being 
completed. Unlike other traditional TCP DoS attacks, this HTTP-based DoS 
attack requires only very little network traffic in order to be effective.  
Information about the Slowloris attack including a PoC tool was published 
here: http://ha.ckers.org/slowloris/


I thought for some time about the whole issue, and then I developed a 
proof-of-concept patch for Apache 2.2.11 (currently only touches the prefork 
MPM), which you can download here: http://synflood.at/tmp/anti-slowloris.diff


The basic principle is that the timeout for new connections is adjusted 
according to the current load on the Apache instance: a load percentage is 
computed in the perform_idle_server_maintenance() routine and made available 
through the global scoreboard. Whenever the timeout is set, the current load 
percentage is taken into account. The result is that slowly sending 
connections are dropped due to a timeout, while legitimate, fast-sending 
connections are still being served. While this approach doesn't completely fix 
the issue, it mitigates the negative impact of the Slowloris attack. Even 
under heavy load, legitimate requests are still being served, even though it - 
in my tests - took a bit longer than usual. And the kind of heavy load that 
I needed to slow down Apache was already quite traffic-intensive, i.e. it 
defeated one of Slowloris' goals, namely having a low traffic footprint that 
would make the attack hard to detect.
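
The shape of the adjustment is roughly the following (illustrative numbers
and names rather than the exact ones in the patch; the load percentage comes
from counting busy versus total workers in the scoreboard):

/* Illustrative sketch of the adaptive timeout, not the patch itself. */
#include "apr_time.h"

static apr_interval_time_t adjust_timeout(apr_interval_time_t configured,
                                          int load_percent /* 0..100 */)
{
    /* At low load keep the configured value; as the scoreboard approaches
     * saturation, shrink the per-connection timeout toward a small floor
     * so that stalled, slowly-sending connections are reaped quickly. */
    const apr_interval_time_t min_timeout = apr_time_from_sec(2);
    apr_interval_time_t scaled = configured * (100 - load_percent) / 100;

    return (scaled < min_timeout) ? min_timeout : scaled;
}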


Please be aware that the patch mentioned above is of proof-of-concept quality: 
the numbers in the adjust_timeout() function were chosen more or less 
arbitrarily, just tuned well enough to successfully mitigate the impact of a 
Slowloris attack in my testing environment.


Regards,
Andreas


Re: Mitigating the Slowloris DoS attack

2009-06-21 Thread Guenter Knauf
Hi Andreas,
Andreas Krennmair wrote:
 For those who are still unaware of the Slowloris attack, it's a
 denial-of-service attack that consumes Apache's resources by opening up
 a great number of parallel connections and slowly sending partial
 requests, never completing them. Since Apache limits the number of
 parallel clients it serves (the MaxClients setting), this blocks further
 requests from being completed. Unlike other traditional TCP DoS
 attacks, this HTTP-based DoS attack requires only very little network
 traffic in order to be effective.  Information about the Slowloris
 attack including a PoC tool was published here:
 http://ha.ckers.org/slowloris/
 
 I thought for some time about the whole issue, and then I developed a
 proof-of-concept patch for Apache 2.2.11 (currently only touches the
 prefork MPM), which you can download here:
 http://synflood.at/tmp/anti-slowloris.diff
wouldn't limiting the number of simultaneous connections from one IP
already help? E.g. something like:
http://gpl.net.ua/modipcount/downloads.html

Guenter.



Re: Mitigating the Slowloris DoS attack

2009-06-21 Thread Graham Dumpleton
2009/6/22 Guenter Knauf fua...@apache.org:
 Hi Andreas,
 Andreas Krennmair wrote:
 For those who are still unaware of the Slowloris attack, it's a
 denial-of-service attack that consumes Apache's resources by opening up
 a great number of parallel connections and slowly sending partial
 requests, never completing them. Since Apache limits the number of
 parallel clients it serves (the MaxClients setting), this blocks further
 requests from being completed. Unlike other traditional TCP DoS
 attacks, this HTTP-based DoS attack requires only very little network
 traffic in order to be effective.  Information about the Slowloris
 attack including a PoC tool was published here:
 http://ha.ckers.org/slowloris/

 I thought for some time about the whole issue, and then I developed a
 proof-of-concept patch for Apache 2.2.11 (currently only touches the
 prefork MPM), which you can download here:
 http://synflood.at/tmp/anti-slowloris.diff
 wouldn't limiting the number of simultaneous connections from one IP
 already help? E.g. something like:
 http://gpl.net.ua/modipcount/downloads.html

Not if the attack is launched from a botnet, which is the more likely
scenario for people who really want to hide their tracks.

BTW, the focus here seems to be on the reading of the request headers
themselves. Can't trickling of actual request content data to a URL
equally tie up handler threads? Either in the case where the request
handler is doing the reads of the request content, or, for the case of a
success status, in ap_discard_request_body() at the end of the request
when HTTP/1.1 and keep-alive are requested.

The only difference really is that if it is done with request headers,
nothing would be logged about it in the access logs, so it is not easy to track.

Graham