Willy,

Thank you for your quick response.

I went with the cookie approach, selecting the proper backend based on the cookie. I'm not doing any redirect: if the cookie is set, HAProxy forwards the request to backend A, otherwise it forwards to backend B. Backends A and B are similar and both try to set a cookie on the client browser, so the end user won't feel any difference and won't be bothered with a redirect. If an attack happens, only backend B is affected; regular users with the cookie keep going to backend A.
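
Roughly, the relevant part of the configuration looks like this (the backend
names and the SEEN cookie are just the ones I picked; addresses are examples
and the defaults section with "mode http" is omitted):

    frontend www
        bind :80
        # requests carrying the cookie go to the protected backend
        acl has_cookie hdr_sub(cookie) SEEN=1
        use_backend bk_a if has_cookie
        default_backend bk_b

    backend bk_a
        server app_a 10.0.0.1:80

    backend bk_b
        server app_b 10.0.0.2:80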

I've tested it with ab to stress the frontend, and it works pretty well: backend A was not affected at all while backend B was busy with many requests.
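
In case it helps someone reproduce the test, the runs looked roughly like
this (host and numbers are only examples; ab's -C option adds a Cookie
header to every request):

    # attacker-like traffic: no cookie, so it lands on backend B
    ab -n 20000 -c 200 http://frontend.example.com/
    # regular-user traffic: sends the cookie, so it lands on backend A
    ab -n 20000 -c 200 -C SEEN=1 http://frontend.example.com/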

Thanks for the important tips.

Regards,

Ahmad

Willy Tarreau wrote:
On Tue, Feb 24, 2009 at 07:43:53PM +0300, Ahmad Al-Ibrahim wrote:
Hi,

I'm using HAProxy on the frontend as a reverse proxy to backend servers, and I'm thinking of possible ways to protect the backend servers from being attacked.
How effective is doing a URL redirect to protect against these attacks?

It will stop all stupid bots which don't even care about parsing the
response. You can even improve the setup by setting a cookie and
checking for it on the request that comes back. The idea is that if
the cookie is there and valid, you forward the traffic to the proper
backend; otherwise you perform a redirect with a set-cookie.
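
As an untested sketch of the idea (ACL and cookie names invented, defaults
omitted), the frontend part could look like this:

    frontend www
        bind :80
        acl has_cookie hdr_sub(cookie) SEEN=1
        # no cookie yet: bounce the client back with a Set-Cookie;
        # real browsers come back with it, dumb bots loop on the 302
        redirect location / set-cookie SEEN=1 if !has_cookie
        default_backend bk_real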

Or balancing based on URI?

With URI hashing, all requests for a given URI go to the same server,
so this will most often overload the one server which matches the URI
being attacked. In most of the DDoS traces I have seen, only one URI
was being requested by thousands of clients.

How about using cookies? For example, logged-in users with cookie A go to backend group A and clients with no cookie set go to backend group X.

See above ;-)
Keep in mind that some people don't like this solution because they
fear that some clients will not get the cookie. I once set up a
2-step redirect for that. The principle is easy:

   1) if uri = /XXX and no cookie, return error page
   2) redirect to /XXX with set-cookie if no cookie
   3) if uri = /XXX, redirect to /

/XXX will catch clients which don't support cookies and gently return
them an error. One could also decide to slow them down while still
granting them access to the service, or to limit their number of
connections (dedicated backend).
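
As an untested sketch (names invented, and a plain 403 from "block"
standing in for the error page), the three steps map to frontend rules
like these; the conditions are mutually exclusive, so their order does
not matter:

    frontend www
        acl is_trap    url /XXX
        acl has_cookie hdr_sub(cookie) SEEN=1
        # 1) /XXX without a cookie: the client ignored the Set-Cookie
        block if is_trap !has_cookie
        # 2) no cookie yet: redirect to the trap URI and set the cookie
        redirect location /XXX set-cookie SEEN=1 if !is_trap !has_cookie
        # 3) cookie present on /XXX: send the client back to the site
        redirect location / if is_trap has_cookie
        default_backend bk_real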

There is the connection tarpit; how effective is it, and how can it be used to protect against DDoS attacks?

It is very effective. I developed it in a hurry to help one guy
whose site was taken down by a medium-sized attack. Once the tarpit
was installed with a proper criterion, we observed the number of
concurrent connections go up and stabilize at about 7K, and the load
on the servers and the frontend firewall dropped.

The tarpit was developed precisely to protect the frontend firewall
and the internet link, because most attack tools simply run a
dirty request in a loop and can't parallelize them. So if you slow
an attacker down to one request per minute, you're saved.

The difficulty is to find the matching criterion. You have to check
your servers' logs to see what causes them trouble, and if you can't
blacklist the URI itself, you often have to fire up tcpdump. It's
very common to find an uncommon header, a broken syntax or something
like that in the request. You then use that to decide to tarpit the
request.
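
For illustration only (the regex is an invented placeholder; use whatever
your own logs reveal, and check that your haproxy version supports
"timeout tarpit"), such a rule can look like this:

    frontend www
        # held connections are kept open this long, then get a fake 500
        timeout tarpit 60s
        # case-insensitive match on a header the attack tool always sends
        reqitarpit ^User-Agent:\ BadBot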

What is the most effective way to protect against such DDoS attacks?

There's no single most effective way. There's only a combination of
tools whose efficiency depends on the attacker's skills and his
knowledge of your counter-measures. So there are a number of important
rules to keep in mind:

   1) you have the logs, the attacker does not. Exploit them to the
      maximum to build the smartest possible matching.

   2) you know what he does and he does not know what you do. You
      must never let the attacker know what you're doing nor how you
      plan to stop him. Reporting wrong information is nice too. For
      instance, the tarpit will return 500 after the timeout with a
      fake server error. But you can also decide to sacrifice a server
      and send all identified crap to it. This is very important
      because every method of filtering will have limits which can
      easily be bypassed with a few minutes or hours of coding once
      understood. You must ensure that your attacker does not even
      know what products you are using.

   3) he knows who you are (your IP) and you don't know behind which
      IP he hides. This is the problematic part, because you don't
      want to block your customers from your own site.

   4) you have to constantly monitor your systems and adapt the
      response to the attack in real time. This prevents the attacker
      from getting a precise idea of your architecture and components,
      which would serve him to build an effective attack. Also, you'll
      have to adjust system tuning (e.g. number of SYN/ACK retries,
      timeouts, etc., see the sysctl sketch after this list),
      balancing between protection efficiency and the site's
      accessibility for normal users.

   5) you must not publicly tell your customers that you're being
      attacked, because if the attacker sees that, he will think
      "hey, they can't stand it anymore, they're about to give up",
      and he will continue. However, stating that "the site is
      slow due to a transient network issue" is fine.

   6) never over-estimate the capability of any of your components,
      and do not hesitate to replace one which does not fit anymore.
      For instance, if you use haproxy and you see the attack is
      smart enough to knock it out, put something in front of it,
      replace it or find any trick to quickly solve the issue.
      Source-based layer-4 balancing to many L7 proxies is very
      effective, BTW. If you can stuff 10 machines with haproxy,
      each of which will sustain 50000 concurrent connections,
      and put a layer-4 LB in front of them, you can sustain 500K
      concurrent L7 connections (including tarpitted ones) for not
      that much money.

   7) the internet link can quickly become the bottleneck, so you
      have to push the attack back to the attackers at the earliest
      possible opportunity. Randomly dropping incoming packets is
      wrong because those packets will be retransmitted, thus will
      increase the link volume. However, redirections, tarpits and
      such things are fine because you can often limit the amount
      of traffic exchanged with an attacker. But once the link is
      full, you're hosed. Today, a Gig pipe can be filled by only
      1000 ADSL2 clients sending dirty traffic, so this type of
      attack is very very cheap.

   8) save all your confs before starting to change settings,
      otherwise you'll forget to restore a lot of them once the
      attack is over.
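
As a small illustration of the system tuning mentioned in rule 4 (Linux
sysctls; the values are examples to adapt, not recommendations):

    # expire half-open connections from spoofed sources faster
    # (the Linux default is 5 SYN/ACK retries)
    sysctl -w net.ipv4.tcp_synack_retries=2
    # keep accepting new clients when the SYN backlog is full
    sysctl -w net.ipv4.tcp_syncookies=1
    # enlarge the SYN backlog itself
    sysctl -w net.ipv4.tcp_max_syn_backlog=8192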


Regards,
Willy


