Hi!

2011/7/26 Willy Tarreau <w...@1wt.eu>


> > There is an option in Dell's servers to install Intel's 10Gbit
> > NIC - it works way faster (3x-5x) than Broadcom.
>
> Intel's 10G NICs are fast, but generally hard to tune: you have a hard
> trade-off between data rate and latency. Basically you tune them for
> large or small packets but not both. Still, for DDoS they might be well
> suited.
>
>
At least they are well suited for tuning, which is the only thing one may
want when something is not working.
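
For what it's worth, on these NICs the large-vs-small-packets trade-off
mostly comes down to interrupt coalescing, so a rough sketch would be
(eth0 and the values are only examples, not recommendations):

  # favor throughput: batch many frames per interrupt
  ethtool -C eth0 adaptive-rx off rx-usecs 100

  # favor latency: interrupt for (almost) every frame
  ethtool -C eth0 adaptive-rx off rx-usecs 0 rx-frames 1

  # show what the driver currently uses
  ethtool -c eth0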


> > But anyway it is impossible to have one server filter a 10Gbit DDoS
> > attack, at least it is very far from a default configuration, so I
> > failed to find a solution.
>
> 10G DDoS is something very hard to resist!
>
>
But here in the year 2011 it is not that difficult to run such an attack. :(


> > But as haproxy is really good software for handling DDoS, you can use
> > Amazon's cloud. And that was my final solution. Amazon's servers take
> > 400Mbit per node, so by installing 60 EC2 nodes I managed to filter
> > DDoS traffic at 24Gbit/s (it was somewhere around 10*10^6 requests per
> > second, or ~150-160k session rate per node) without any problems. Yes,
> > there were some balancing issues, and some failures for a couple of
> > minutes, but that's nothing compared to the typical DDoS experience.
>
> Wow! I'm impressed! But did you perform the tests from within Amazon's
> cloud or from the outside? I don't know what their external peering is,
> and I'm wondering how you managed to send 24 Gbps over the internet.
> Also, what's the cost of running 60 of these nodes?
>

We tested from the outside. In fact that was a real attack. The botnet
consisted of ~10-12k bots, each opening 1000 connections/second. There
were some failures, when Amazon's load balancers responded 'unknown
failure', but things got back to normal in a few minutes. At peak we had
60 nodes each running at 50Mbyte/s; if you want I can take screenshots
of Amazon's console stats and send them to you.

As to the costs...  running one such EC2 node is $0.38/hour, so
$0.38 * 60 = $22.80/h, and traffic is $0.13/GB - that is the most
expensive part.
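(For scale: the full 24Gbit/s is about 3GByte/s, i.e. roughly 10.8TB per
hour, so at $0.13/GB the traffic alone would be on the order of $1,400/h
at peak.)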



> > And a question to Willy: what hardware in your opinion is the best to
> > run haproxy on to serve lots of HTTP requests, 99% of which are trash
> > to be blocked?
>
> My best results were achieved on a Core i5 3.3 GHz (dual core, 4
> threads). In short, I bind network IRQs to core zero and haproxy to core
> one, and the two remaining threads ensure there is some rope left for
> system management, SSH, etc. With this you can run at 100% CPU all day
> if you want. I managed to get haproxy to dynamically filter 300k
> connections per second on such a configuration. That's SYN, SYN/ACK,
> ACK, RST, based on a source IP's connection rate.
>
How many client IPs were you testing? I guess when the number of clients
reaches some high value there could be problems with the CPU cache,
because the tables keep growing?
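
Out of curiosity, was that the stick-table approach? Something along
these lines is what I have in mind (the frontend name, table size and
threshold are just placeholders):

  frontend public
      bind :80
      # track each source IP's connection rate over 3 seconds
      stick-table type ip size 200k expire 30s store conn_rate(3s)
      tcp-request connection track-sc1 src
      # drop sources opening more than 100 connections in that window
      tcp-request connection reject if { sc1_conn_rate gt 100 }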



> Depending on the type of DDoS you're dealing with, it may be worth
> distributing the NIC's IRQs to multiple cores so that the kernel's work
> can scale (eg: emit a SYN/ACK). It might even be worth having multiple
> haproxy processes, because when you're blocking a DDoS, you don't really
> mind about monitoring, stats, health checks, etc... You only want your
> machine to be as fast as possible, and doing so is possible with
> multi-queues. You then need to spread your NIC's IRQs to all cores and
> bind as many haproxy processes as cores (and manually pin them). The
> CPU-NIC affinity will not be good on the backend but that's not the
> issue since you're dealing with a frontend where you want to reject most
> of your traffic.
>
>
That's true. I ended up trying to filter attacks on a Dell R510 with
multiple haproxy processes and irqbalance running. And about correct
statistics with multiple processes - why not use shared memory to store
the stats? Would it hurt performance because of memory blocks being
transferred between cores?
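
For reference, if I understand the manual pinning correctly, on a 4-core
box with 4 RX queues it would be roughly the following instead of
irqbalance (the IRQ numbers come from /proc/interrupts, and the PIDs are
placeholders):

  # haproxy.cfg, global section: start one process per core
  global
      nbproc 4

  # pin each NIC queue's IRQ to its own core (one bit per core)
  echo 1 > /proc/irq/40/smp_affinity   # queue 0 -> core 0
  echo 2 > /proc/irq/41/smp_affinity   # queue 1 -> core 1
  echo 4 > /proc/irq/42/smp_affinity   # queue 2 -> core 2
  echo 8 > /proc/irq/43/smp_affinity   # queue 3 -> core 3

  # pin the four haproxy processes to the matching cores
  taskset -pc 0 <pid1>
  taskset -pc 1 <pid2>
  taskset -pc 2 <pid3>
  taskset -pc 3 <pid4>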


> Best regards,
> Willy
