Re: Mitigating the Slowloris DoS attack
On Mon, Jun 22, 2009 at 9:07 PM, Graham Dumpleton wrote:
> 2009/6/23 Weibin Yao :
>> William A. Rowe, Jr. at 2009-6-23 2:00 wrote:
>>> Andreas Krennmair wrote:
>>>> * Guenter Knauf [2009-06-22 04:30]:
>>>>> wouldn't limiting the number of simultaneous connections from one IP
>>>>> already help? F.e. something like:
>>>>> http://gpl.net.ua/modipcount/downloads.html
>>>>
>>>> Not only would this be futile against the Slowloris attack (imagine n
>>>> connections from n hosts instead of n connections from 1 host), it
>>>> would also potentially lock out groups of people behind the same NAT
>>>> gateway.
>>>
>>> FWIW mod_remoteip can be used to partially mitigate the weakness of
>>> this class of solutions.
>>>
>>> However, it only works for known, trusted proxies, and can only be
>>> safely used for those with public IPs. Where the same 10.0.0.5 on your
>>> private NAT backend becomes the same 10.0.0.5 within the Apache
>>> server's DMZ, issues like "Allow from 10.0.0.0/8" become painfully
>>> obvious. I haven't found a good solution, but mod_remoteip still needs
>>> one, eventually.
>>
>> I have an idea to mitigate the problem: put Nginx as a reverse proxy
>> server in front of Apache.
>
> [...]
>
> So, that is my crazy thought for the day, and I am sure that it will be
> derided for what it is worth.

Yes, I think the idea is a little crazy. We just need to fix the input filters and encourage the use of the event MPM, along with FastCGI as a connector; then most of these problems go away :(
Re: Mitigating the Slowloris DoS attack
On Sun, Jun 21, 2009 at 4:10 AM, Andreas Krennmair wrote:
> Hello everyone,
>
> The basic principle is that the timeout for new connections is adjusted
> according to the current load on the Apache instance: a load percentage is
> computed in the perform_idle_server_maintenance() routine and made available
> through the global scoreboard. Whenever the timeout is set, the current load
> percentage is taken into account. The result is that slowly sending
> connections are dropped due to a timeout, while legitimate, fast-sending
> connections are still being served. While this approach doesn't completely
> fix the issue, it mitigates the negative impact of the Slowloris attack.

Mitigation is the wrong approach. We all know our architecture is wrong. We have started on fixing it, but we need to finish the async input rewrite on trunk; all of the people who have hacked on it, myself included, have hit ENOTIME for the last several years. Hopefully the publicity this has generated will bring renewed interest in solving this problem the right way, once and for all :)

It doesn't need to be the simple MPM or the event MPM; it's not even about MPMs, it's about how the whole input filter stack works. So... I write yet another email about it... and disappear into the ether of ENOTIME once again.

-Paul
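The load-proportional timeout that Andreas's patch description sketches can be illustrated roughly as follows (an illustrative sketch only; function and parameter names here are invented, not the patch's — the real patch computes the load percentage inside perform_idle_server_maintenance() and publishes it via the scoreboard):

```python
def adaptive_timeout(busy_workers, total_workers,
                     base_timeout=300.0, min_timeout=5.0):
    """Scale the connection timeout down as worker load rises.

    At 0% load a client gets the full base timeout; as the load
    approaches 100%, slowly sending connections are cut off quickly,
    while fast clients (which finish well inside min_timeout) are
    unaffected.  Values are hypothetical defaults for illustration.
    """
    load = busy_workers / total_workers      # 0.0 .. 1.0
    timeout = base_timeout * (1.0 - load)
    return max(timeout, min_timeout)
```

For example, at 50% load a new connection would get half the base timeout; at full load, only the floor value.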
Re: Mitigating the Slowloris DoS attack
2009/6/23 Weibin Yao :
> William A. Rowe, Jr. at 2009-6-23 2:00 wrote:
>> Andreas Krennmair wrote:
>>> * Guenter Knauf [2009-06-22 04:30]:
>>>> wouldn't limiting the number of simultaneous connections from one IP
>>>> already help? F.e. something like:
>>>> http://gpl.net.ua/modipcount/downloads.html
>>>
>>> Not only would this be futile against the Slowloris attack (imagine n
>>> connections from n hosts instead of n connections from 1 host), it
>>> would also potentially lock out groups of people behind the same NAT
>>> gateway.
>>
>> FWIW mod_remoteip can be used to partially mitigate the weakness of
>> this class of solutions.
>>
>> However, it only works for known, trusted proxies, and can only be
>> safely used for those with public IPs. Where the same 10.0.0.5 on your
>> private NAT backend becomes the same 10.0.0.5 within the Apache
>> server's DMZ, issues like "Allow from 10.0.0.0/8" become painfully
>> obvious. I haven't found a good solution, but mod_remoteip still needs
>> one, eventually.
>
> I have an idea to mitigate the problem: put Nginx as a reverse proxy
> server in front of Apache.

Although your comment is perhaps heresy here, it does highlight one of the things that nginx is good at, even if you don't use it to serve static files, with Apache handling just the dynamic web application. That is, it can isolate Apache from slow clients, whether that is an attack, as in this case, or just normal users on slow networks. The nginx proxy module also helps, in the way it buffers request content to disk before actually sending the request on to the backend, by not tying up Apache's limited request handler threads until the request content is completely available; nginx does have an upper limit on this, though, and will still stream when the POST content is large enough.
The nginx server works better at avoiding problems with slow clients because it is event driven rather than threaded, and so can handle more connections without needing to tie up expensive threads. Unfortunately, trying to make socket accept handling in Apache event driven, with requests handed off to a thread for processing only when ready, can introduce its own problems, because an event-driven system can tend to greedily accept new socket connections. In a multiprocess server configuration this can mean that a single process accepts more than its fair share of socket connections and, by the time it has read the initial request headers, may not have enough available threads to handle the requests. In the meantime, another server process, which did not get in quickly enough to accept some of the connections, could be sitting there idle. How you mediate between multiple server processes to avoid this sort of problem would be tricky, if it can be done at all.

Anyway, now for a harebrained suggestion that could bring some of this nginx goodness to Apache, although no doubt it would have various limitations which, to solve properly and integrate seamlessly into Apache, would require some changes in the core.

The idea here is to have an Apache module which spawns off its own child process implementing a very small, lightweight, event-driven proxy that listens on the real listener sockets you want to expose. This process's sole job would be to read in the request headers, and perhaps optionally buffer up request content, and then squirt it across to the real Apache child server processes to be handled once it has all the information it needs. To that end it wouldn't be a general-purpose proxy, but quite customised. As such, it could perhaps even be made more efficient than nginx in the way it is used to protect Apache from such things as slow clients.
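The accept-fairness problem described above could in principle be mediated per process by capping how many accepted-but-unparsed connections each process may hold at once. A minimal illustrative sketch of that idea (not Apache code; the class and method names are invented):

```python
import threading

class AcceptGovernor:
    """Cap how many connections one process may hold in the
    'accepted but headers not yet read' state, so an event-driven
    process cannot greedily grab more sockets than it will have
    worker threads available to serve."""

    def __init__(self, max_pending):
        self.sem = threading.BoundedSemaphore(max_pending)

    def try_accept(self):
        """Call before accept(); False means this process should
        leave the connection for a sibling process to pick up."""
        return self.sem.acquire(blocking=False)

    def headers_complete(self):
        """Call once the request head is parsed and handed to a worker."""
        self.sem.release()
```

This only bounds one process's appetite; as noted above, mediating fairly between several processes contending on the same listener remains the hard part.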
For HTTP at least, this probably wouldn't be too hard to do and likely wouldn't need any changes to the core. You could even make its use optional, to the extent of it applying only to certain virtual hosts. Where it all gets a lot harder, though, is virtual hosts which use HTTPS.

So, that is my crazy thought for the day, and I am sure that it will be derided for what it is worth. I still find the thought interesting though; it falls into that class of things I find interesting due to the challenge it presents. :-)

Graham
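The customised front proxy sketched above — read the complete request head from the client first, only then involve a backend worker — can be illustrated with a small event-driven sketch (purely illustrative, using Python's asyncio rather than the Apache child process Graham proposes; names and timeout values are invented):

```python
import asyncio

HEADER_TIMEOUT = 10.0   # the complete request head must arrive within this window

async def handle_client(reader, writer, backend_host, backend_port):
    """Read the complete request head from a (possibly slow) client
    before opening a backend connection, so a slow sender never ties
    up a backend worker."""
    try:
        head = await asyncio.wait_for(
            reader.readuntil(b"\r\n\r\n"), timeout=HEADER_TIMEOUT)
    except (asyncio.TimeoutError, asyncio.IncompleteReadError,
            asyncio.LimitOverrunError):
        writer.close()          # client was too slow or sent garbage
        return
    # Only now does the backend see the connection, headers complete.
    b_reader, b_writer = await asyncio.open_connection(backend_host, backend_port)
    b_writer.write(head)
    await b_writer.drain()

    async def pipe(src, dst):
        try:
            while data := await src.read(65536):
                dst.write(data)
                await dst.drain()
        finally:
            dst.close()

    # Relay the rest of the exchange in both directions.
    await asyncio.gather(pipe(reader, b_writer), pipe(b_reader, writer))
```

A Slowloris client trickling header bytes exhausts only this cheap event loop's timeout, never a backend thread — which is the whole point of the idea.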
Re: Mitigating the Slowloris DoS attack
William A. Rowe, Jr. at 2009-6-23 2:00 wrote:
> Andreas Krennmair wrote:
>> * Guenter Knauf [2009-06-22 04:30]:
>>> wouldn't limiting the number of simultaneous connections from one IP
>>> already help? F.e. something like:
>>> http://gpl.net.ua/modipcount/downloads.html
>>
>> Not only would this be futile against the Slowloris attack (imagine n
>> connections from n hosts instead of n connections from 1 host), it would
>> also potentially lock out groups of people behind the same NAT gateway.
>
> FWIW mod_remoteip can be used to partially mitigate the weakness of this
> class of solutions.
>
> However, it only works for known, trusted proxies, and can only be safely
> used for those with public IPs. Where the same 10.0.0.5 on your private
> NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
> issues like "Allow from 10.0.0.0/8" become painfully obvious. I haven't
> found a good solution, but mod_remoteip still needs one, eventually.

I have an idea to mitigate the problem: put Nginx as a reverse proxy server in front of Apache.

--
Weibin Yao
Re: Mitigating the Slowloris DoS attack
Hi,

How about coding a module that looks at how many bytes are read and, if the chunk of data is too small, closes the connection? Something like a MinDataReadSize: if the read() function reads too little data, close() the socket... Dunno if it's possible to hook directly into the connection hook to do this...

Matthieu

William A. Rowe, Jr. wrote:
> Andreas Krennmair wrote:
>> * Guenter Knauf [2009-06-22 04:30]:
>>> wouldn't limiting the number of simultaneous connections from one IP
>>> already help? F.e. something like:
>>> http://gpl.net.ua/modipcount/downloads.html
>>
>> Not only would this be futile against the Slowloris attack (imagine n
>> connections from n hosts instead of n connections from 1 host), it would
>> also potentially lock out groups of people behind the same NAT gateway.
>
> FWIW mod_remoteip can be used to partially mitigate the weakness of this
> class of solutions.
>
> However, it only works for known, trusted proxies, and can only be safely
> used for those with public IPs. Where the same 10.0.0.5 on your private
> NAT backend becomes the same 10.0.0.5 within the Apache server's DMZ,
> issues like "Allow from 10.0.0.0/8" become painfully obvious. I haven't
> found a good solution, but mod_remoteip still needs one, eventually.
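Matthieu's hypothetical MinDataReadSize idea — drop connections that only drip tiny chunks of data — amounts to tracking a per-connection receive rate. A minimal sketch (illustrative only; the name and thresholds are invented, and a real module would hook Apache's connection filters):

```python
import time

class SlowClientGuard:
    """Track per-connection receive rate; flag connections whose
    average rate falls below a minimum threshold after a grace
    period (a sketch of the MinDataReadSize idea)."""

    def __init__(self, min_bytes_per_sec=100, grace_period=5.0):
        self.min_rate = min_bytes_per_sec
        self.grace = grace_period
        self.conns = {}   # conn_id -> (start_time, bytes_seen)

    def on_read(self, conn_id, nbytes, now=None):
        """Record a read; return True if the connection should be closed."""
        now = time.monotonic() if now is None else now
        start, seen = self.conns.get(conn_id, (now, 0))
        seen += nbytes
        self.conns[conn_id] = (start, seen)
        elapsed = now - start
        if elapsed < self.grace:        # let slow starters settle first
            return False
        return seen / elapsed < self.min_rate
```

The grace period matters: without it, the very first small read would condemn every connection, including the 14k4-modem users mentioned elsewhere in this thread.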
Re: build mod_proxy by source
Look:

$ ~/micex/opt/httpd-worker/bin/apxs -c -o mod_proxy.so mod_proxy.c proxy_util.c
/home/marko/micex/opt/httpd-worker/build/libtool --silent --mode=compile gcc -prefer-pic -g -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/home/marko/micex/opt/httpd-worker/include -I/home/marko/micex/opt/httpd-worker/include -I/home/marko/micex/opt/httpd-worker/include -c -o mod_proxy.lo mod_proxy.c && touch mod_proxy.slo
/home/marko/micex/opt/httpd-worker/build/libtool --silent --mode=compile gcc -prefer-pic -g -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -pthread -I/home/marko/micex/opt/httpd-worker/include -I/home/marko/micex/opt/httpd-worker/include -I/home/marko/micex/opt/httpd-worker/include -c -o proxy_util.lo proxy_util.c && touch proxy_util.slo
/home/marko/micex/opt/httpd-worker/build/libtool --silent --mode=link gcc -o mod_proxy.la -rpath /home/marko/micex/opt/httpd-worker/modules -module -avoid-version proxy_util.lo mod_proxy.lo

$ ls -la .libs/mod_proxy.so
-rwxr-xr-x 1 marko marko 177683 2009-06-23 01:30 .libs/mod_proxy.so

On Tue, Jun 23, 2009 at 12:14 AM, h iroshan wrote:
> hi Kevac Marko,
>
> apxs -c -o mod_proxy.so mod_proxy.c proxy_util.c
>
> the above command does not generate mod_proxy.so. Please help me.
>
> Regards
> Iroshan

--
Marko Kevac
Re: [Fwd: Slowloris]
On Mon, Jun 22, 2009 at 02:23:12PM +0200, Dirk-Willem van Gulik wrote:
>>> - Seriously rewrite apache / add a worker which mimics the
>>>   accept_filter.ko of freebsd somewhat, in that it is a single-threaded
>>>   async select() loop which buffers things up until they are cooked
>>>   enough (i.e. the client has enough skin in the game) to hand off to a
>>>   real worker.

Is this mechanism not limited to HTTP, missing HTTPS? So I do not think it can be a general solution. I am not an Apache developer, but would the event MPM not be of some use in this case?

Otherwise, I see a lack of granular timeout values. RSnake's latest take can be fought with a low KeepAliveTimeout (-> http://ha.ckers.org/blog/20090620/http-longevity-during-dos/). One should be able to assign timeouts to other request phases too. And it should be possible to set these timeouts in a way that a subsequent header or a single POST payload byte does not reset them to zero again.

Just my 2 cents

Christian Folini

--
If you shut your door to all errors truth will be shut out.
--- Rabindranath Tagore
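The non-resetting, per-phase timeout Christian asks for (the approach httpd later shipped as mod_reqtimeout) is an absolute deadline per request phase, rather than an idle timer that each arriving byte pushes back. A small sketch of the distinction (illustrative; class name and limits are invented):

```python
import time

class PhaseDeadline:
    """Absolute deadline per request phase.  Unlike a per-read idle
    timeout, receiving one more header byte does NOT push the
    deadline back, so a trickling client cannot stay alive forever."""

    def __init__(self, phase_limits):
        self.limits = phase_limits      # e.g. {"headers": 20, "body": 60}
        self.deadline = None

    def enter_phase(self, phase, now=None):
        now = time.monotonic() if now is None else now
        self.deadline = now + self.limits[phase]

    def remaining(self, now=None):
        now = time.monotonic() if now is None else now
        return self.deadline - now

    def expired(self, now=None):
        return self.remaining(now) <= 0
```

With this shape, a Slowloris client sending one header byte every few seconds still hits the headers deadline, while a normal client that completes its request head quickly never notices the limit.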
Re: build mod_proxy by source
hi Kevac Marko,

apxs -c -o mod_proxy.so mod_proxy.c proxy_util.c

The above command does not generate mod_proxy.so. Please help me.

Regards
Iroshan
Re: build mod_proxy by source
hi Kevac Marko,

Thank you very much.
Re: build mod_proxy by source
apxs -c -o mod_proxy.so mod_proxy.c proxy_util.c

On Mon, Jun 22, 2009 at 11:00 PM, h iroshan wrote:
> Hi All,
>
> I need to build mod_proxy from source rather than enable it in the
> configuration. I don't know how to build it with apxs as it has two
> dependent files (proxy_util.c and mod_proxy.c). Please help me to
> overcome this problem.
>
> Best Regards,
> Iroshan
> Under Graduate-UCSC
> Sri Lanka

--
Marko Kevac
build mod_proxy by source
Hi All,

I need to build mod_proxy from source rather than enable it in the configuration. I don't know how to build it with apxs as it has two dependent files (proxy_util.c and mod_proxy.c). Please help me to overcome this problem.

Best Regards,
Iroshan
Under Graduate-UCSC
Sri Lanka
Re: Apache requires read permissions for parent directories of configuration files
William A. Rowe, Jr. wrote:
> Ivan Zhakov wrote:
>> * is it possible to remove the APR_FILEPATH_TRUENAME argument in the
>>   trunk of Apache HTTP Server? (see attached patch)
>
> -1, veto for such a change.
>
> Change this and httpd, and even third-party modules, can ultimately
> discover their configuration file is invalid, leading to security
> exposures.

FWIW - I'm willing to entertain a change to record each failed true-name resolution lookup in the error log ("Failed to resolve true pathname of C:\ABC, file permissions problem?"). This will become extremely noisy in the error log very quickly when it happens several times per request, but I suspect it's better than a failure that admins can't explain.
Re: Apache requires read permissions for parent directories of configuration files
Ivan Zhakov wrote:
> I encountered the following problem with Apache HTTPD on Windows:
> * let's suppose that the server root is "C:\ABC\XYZ\root";
> * the httpd service has all appropriate access permissions for the
>   server root;
> * but the httpd service doesn't have any access permission for the
>   parents of the root, e.g. no access to "C:\ABC" and "C:\ABC\XYZ";
> * in this case httpd fails to start with the error message "Invalid file
>   path C:\ABC\XYZ\root\conf\htpasswd" if the AuthUserFile directive is
>   used.
>
> We researched this and found that it happens with most Apache
> directives, because they use the function ap_server_root_relative(),
> which in turn calls apr_filepath_merge() with the flag
> APR_FILEPATH_TRUENAME.
>
> This change was introduced in r90571 [1]; before r90571,
> ap_make_full_path() was used, which does not perform file path
> resolution the way apr_filepath_merge() with APR_FILEPATH_TRUENAME does.

Yes; this change is by design...

> We have the following questions:
> * what is the reason to use the APR_FILEPATH_TRUENAME argument in that
>   place?

How do you suggest that in httpd.conf Apache disambiguates C:\ABC from C:\abc, or worse yet, C:\abacadabara from C:\abacab~1, etc.? Without resolving the true path elements it's very difficult to do this. Therefore, making the full path ensures that two file names in two different directives, or the resolved path and the path given by a directive, can be authoritatively compared for equality.

> * is it possible to remove the APR_FILEPATH_TRUENAME argument in the
>   trunk of Apache HTTP Server? (see attached patch)

-1, veto for such a change.

Change this and httpd, and even third-party modules, can ultimately discover their configuration file is invalid, leading to security exposures.
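The ambiguity William describes can be shown concretely. Case-folding alone resolves C:\ABC vs C:\abc, but 8.3 short names (C:\abacab~1) still need an OS lookup such as GetLongPathName — and that lookup is exactly what requires read access to the parent directories, which is Ivan's problem. A small sketch (illustrative; the helper names are invented):

```python
import ntpath

def naive_equal(a, b):
    """Plain string comparison treats C:\\ABC and C:\\abc as different
    paths, even though on Windows they name the same directory."""
    return a == b

def folded_equal(a, b):
    """Case-folding (ntpath.normcase) fixes the case problem, but NOT
    8.3 short names: an abbreviated name and its long form still
    compare unequal without an OS call, so true-name resolution (the
    APR_FILEPATH_TRUENAME behaviour) remains necessary for an
    authoritative comparison."""
    return ntpath.normcase(ntpath.normpath(a)) == \
           ntpath.normcase(ntpath.normpath(b))
```

This is why the thread's compromise is logging the failed lookup rather than dropping the flag: without the resolution step, directive paths simply cannot be compared safely.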
Re: Mitigating the Slowloris DoS attack
Andreas Krennmair wrote: > * Guenter Knauf [2009-06-22 04:30]: >> wouldnt limiting the number of simultanous connections from one IP >> already help? F.e. something like: >> http://gpl.net.ua/modipcount/downloads.html > > Not only would this be futile against the Slowloris attack (imagine n > connections from n hosts instead of n connections from 1 host), it would > also potentially lock out groups of people behind the same NAT gateway. FWIW mod_remoteip can be used to partially mitigate the weakness of this class of solutions. However, it only works for known, trusted proxies, and can only be safely used for those with public IP's. Where the same 10.0.0.5 on your private NAT backed becomes the same 10.0.0.5 within the apache server's DMZ, the issues like Allow from 10.0.0.0/8 become painfully obvious. I haven't found a good solution, but mod_remoteip still needs one, eventually.
Apache requires read permissions for parent directories of configuration files
Hi,

I encountered the following problem with Apache HTTPD on Windows:
* let's suppose that the server root is "C:\ABC\XYZ\root";
* the httpd service has all appropriate access permissions for the server root;
* but the httpd service doesn't have any access permission for the parents of the root, e.g. the httpd service doesn't have access to "C:\ABC" and "C:\ABC\XYZ";
* in this case httpd fails to start with the error message "Invalid file path C:\ABC\XYZ\root\conf\htpasswd" if the AuthUserFile directive is used.

We researched this and found that it happens with most Apache directives, because they use the function ap_server_root_relative(), which in turn calls apr_filepath_merge() with the flag APR_FILEPATH_TRUENAME.

This change was introduced in r90571 [1]; before r90571, ap_make_full_path() was used, which does not perform file path resolution the way apr_filepath_merge() with the APR_FILEPATH_TRUENAME flag does.

We have the following questions:
* what is the reason to use the APR_FILEPATH_TRUENAME argument in that place?
* is it possible to remove the APR_FILEPATH_TRUENAME argument in the trunk of Apache HTTP Server? (see attached patch)

Any comments will be helpful.

[1] http://svn.apache.org/viewvc?view=rev&revision=90571

--
Ivan Zhakov
VisualSVN Team

--- server\config.c.orig	2008-12-02 16:28:22.0 +0300
+++ server\config.c	2009-06-21 23:35:24.41200 +0400
@@ -1351,8 +1351,8 @@
 {
     char *newpath = NULL;
     apr_status_t rv;
-    rv = apr_filepath_merge(&newpath, ap_server_root, file,
-                            APR_FILEPATH_TRUENAME, p);
+    rv = apr_filepath_merge(&newpath, ap_server_root, file, 0, p);
+
     if (newpath && (rv == APR_SUCCESS
                     || APR_STATUS_IS_EPATHWILD(rv)
                     || APR_STATUS_IS_ENOENT(rv)
                     || APR_STATUS_IS_ENOTDIR(rv))) {
Re: [Fwd: Slowloris]
(moved to dev@ - as this issue is now perfectly public).

Ben Laurie wrote:
> Dirk-Willem van Gulik wrote:
>> Ben Laurie wrote:
>>> What does that matter? If you need to do it less to Apache, then
>>> Apache is broken in comparison to the others.
>>
>> Completely agreed - no need to get into a spitting match as to who is
>> most broken. We had the same problem in '96 or so - and they were a
>> total pain to deal with. Options for dealing with this can be:
>>
>> - Very aggressive timeouts, and intentionally delaying/increasing the
>>   cost of the TCP setup - but you are into freebsd/solaris-style kernel
>>   filters.
>> - Very aggressive timeouts generally - but you penalize the 14k4 modem
>>   users.
>> - Binning users after a while into such a group - but then you penalize
>>   certain ISPs or NAT blocks.
>> - Not doing much - but a graded response when you get resource-tight,
>>   i.e. start prioritizing 'active' connections over slow ones, either
>>   by making the timeouts an exponential function of the load or by some
>>   simple binning (which is what we did in phase 2).
>> - Handing off (too) inactive connections to something cheaper - this is
>>   what we did in the final phase - using a single-threaded select()
>>   loop with a fixed buffer footprint. However that used a Solaris
>>   inter-process 'file descriptor passing' message - which I guess is
>>   out of vogue now.
>
> Why? This is actually quite in vogue for security reasons :-)

Sounds like I have missed something. Blush :) (Especially after reading up on all the work in OpenBSD :)!)

Having read up on it a bit - is it fair to conclude that the mechanism for passing file descriptors between processes is now a solid cross-platform thing? But I am not seeing something easy in APR. Do we have modules already doing this?

And really - in this day and age you probably want to tell your switch/router/network-piece-of-kit/dog to move the TCP connection to another machine. And I have no idea if there are any APIs for this which are cross-vendor.

>> - Seriously rewrite apache / add a worker which mimics the
>>   accept_filter.ko of freebsd somewhat, in that it is a single-threaded
>>   async select() loop which buffers things up until they are cooked
>>   enough (i.e. the client has enough skin in the game) to hand off to a
>>   real worker.

Any more approaches possible?

Dw
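On Dirk's cross-platform question: SCM_RIGHTS file-descriptor passing over Unix-domain sockets is widely portable across modern Unixes, and Python has exposed it directly since 3.9 via socket.send_fds/recv_fds. A minimal sketch of the mechanism (illustrative only; this says nothing about APR's API):

```python
import os
import socket

def pass_fd(sender_sock, fd):
    """Hand a file descriptor to another process over a Unix-domain
    socket using SCM_RIGHTS ancillary data -- the same mechanism as
    the Solaris fd-passing trick mentioned in the thread."""
    socket.send_fds(sender_sock, [b"x"], [fd])   # 1-byte payload carries the fd

def receive_fd(receiver_sock):
    """Receive one file descriptor; the kernel duplicates it into
    the receiving process's fd table."""
    msg, fds, flags, addr = socket.recv_fds(receiver_sock, 1, 1)
    return fds[0]
```

The handed-off socket stays a perfectly ordinary fd on the receiving side, which is what makes the "park slow connections in a cheap single-threaded process" design workable.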
Re: Mitigating the Slowloris DoS attack
Guenter Knauf wrote:
> Hi Andreas,
> Andreas Krennmair schrieb:
>> For those who are still unaware of the Slowloris attack, it's a
>> denial-of-service attack that consumes Apache's resources by opening up
>> a great number of parallel connections and slowly sending partial
>> requests. A description of the attack including a PoC tool was
>> published here:
>> http://ha.ckers.org/slowloris/
>> I thought for some time about the whole issue, and then I developed a
>> proof-of-concept patch for Apache 2.2.11 (currently only touching the
>> prefork MPM), which you can download here:
>> http://synflood.at/tmp/anti-slowloris.diff
>
> wouldn't limiting the number of simultaneous connections from one IP
> already help? F.e. something like:
> http://gpl.net.ua/modipcount/downloads.html

Keep in mind that, if this attack turns into a real issue, it is likely to be through a vector like botnets. It is pretty common* to see lots of bots behind a single (corporate) NAT gateway. You would not necessarily want to penalize an entire intranet for their lack of security that way. That is not our job :).

Also - these things are only a problem when the server is resource-tight - and even then - it could be modified to just invest little at that point -- either by having a different accept mechanism -or- by detecting sluggishness and then handing the connection back to something more async/single-threaded which deals with all slow connections - freeing up the 'full' worker for real work.

Dw

*: e.g. see the Conficker stats.
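The graded-response idea Dirk describes — only when resource-tight, shed the slowest connections first so active workers are freed for real work — can be sketched as a simple binning/shedding policy (purely illustrative; names and the rate metric are invented):

```python
import heapq

def shed_slowest(conn_rates, capacity):
    """Given a map of connection id -> observed bytes/sec, return the
    ids to drop so at most `capacity` connections remain, always
    shedding the slowest senders first.  Under capacity, nothing is
    dropped -- the graded part: slow clients are only penalized when
    the server is actually resource-tight."""
    if len(conn_rates) <= capacity:
        return []
    n_drop = len(conn_rates) - capacity
    return heapq.nsmallest(n_drop, conn_rates, key=conn_rates.get)
```

Combined with an exponential load-based timeout, as Dirk suggests, this punishes a Slowloris flood while an uncongested server still tolerates genuinely slow modem users.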