I would also be very interested in the learned opinions of the other readers
of this list on this topic.

When I first considered this in the HAProxy context a few weeks ago, I
figured that implementing this functionality in HAProxy, or putting it
inline with HAProxy on the same server, would be a bottleneck, and that
modsecurity implemented on the web servers would scale better. This
(presumptuously) assumes high volumes.

If you do want to implement filtering in front of the web servers - i.e.
implement some form of web application firewall (WAF) - then I believe it
is possible to do this with modsecurity using Apache as a proxy.
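
A minimal sketch of that arrangement, assuming Apache 2.2 with mod_proxy
and mod_security2 loaded (the backend hostname and the CRS include path are
placeholders you would adjust for your own installation):

    <VirtualHost *:80>
        ServerName www.example.com

        # Reverse-proxy all traffic to the real web servers (or to haproxy)
        ProxyRequests Off
        ProxyPass        / http://backend.internal/
        ProxyPassReverse / http://backend.internal/

        # Run mod_security on everything passing through
        SecRuleEngine On
        SecRequestBodyAccess On

        # Pull in the Core Rule Set (path depends on your installation)
        Include conf/modsecurity_crs/*.conf
    </VirtualHost>

haproxy can then sit in front of one or more of these filtering proxies, or
behind them, depending on where you want the load spread.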

I designed and managed the implementation of a WAF in the late '90s, before
they were commercially available items. This sat in front of the web servers
at a financial institution for over 10 years and stopped all the automated
threats. Relatively simple white-lists can be very effective in this context
and can be largely independent of the applications, although, obviously, some
inspection of the HTTP traffic is required. At this simple level you can
(see the sketch after the list):

* Strip headers that are not in the white-list
* Inspect URIs for invalid characters
* Reject methods you don't want to deal with
* Inspect POST bodies for invalid characters (although file uploads can
present problems here)
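
As a rough illustration, those checks might look something like this in
mod_security rule form. The header and character white-lists here are
examples only, and note that mod_security denies rather than strips, so for
genuine header stripping you would reach for mod_headers instead:

    # Reject methods we don't want to deal with
    SecRule REQUEST_METHOD "!^(?:GET|HEAD|POST)$" \
        "phase:1,deny,status:405,msg:'Unexpected method'"

    # Deny requests carrying headers outside the white-list
    SecRule REQUEST_HEADERS_NAMES \
        "!^(?i:Host|User-Agent|Accept|Accept-Encoding|Accept-Language|Referer|Cookie|Connection|Content-Type|Content-Length)$" \
        "phase:1,deny,status:400,msg:'Unexpected request header'"

    # Inspect URIs for characters outside a conservative set
    SecRule REQUEST_URI "!^[A-Za-z0-9/._~?&=%+-]*$" \
        "phase:1,deny,status:400,msg:'Invalid character in URI'"

    # Same idea for POST parameters (file uploads would need an exception)
    SecRule ARGS_POST "[^\x20-\x7e]" \
        "phase:2,deny,status:400,msg:'Invalid character in POST body'"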

Adding application knowledge is a balancing act between the configuration
overhead and how much of that overhead the application developers can
stomach, but it greatly increases the effectiveness of the firewall. I
generally assume that the application developers will not be interested in
security (if not right now, then at some time in the future) and that the WAF
is the belt even if they don't supply the braces :-). At this level you might
be able to restrict methods and parameters (in the body of a POST or in the
query string) to specific URIs.
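
As an illustration, with mod_security that sort of per-URI tightening can be
hung off Apache <Location> blocks; the /login path and its username/password
parameters below are purely hypothetical:

    <Location /login>
        # Only POST makes sense for this page
        SecRule REQUEST_METHOD "!^POST$" \
            "phase:1,deny,status:405,msg:'Wrong method for /login'"

        # Only the parameters the page actually uses
        # (ARGS covers both query-string and POST-body parameters)
        SecRule ARGS_NAMES "!^(?:username|password)$" \
            "phase:2,deny,status:400,msg:'Unexpected parameter'"

        # And a tighter format check on one of them
        SecRule ARGS:username "!^[A-Za-z0-9._-]{1,32}$" \
            "phase:2,deny,status:400,msg:'Bad username format'"
    </Location>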

I'm not a great fan of 'learned' behaviour in tools like this; I much prefer
explicit, testable rules that do not vary with user behaviour or browsing
history.

That said, there are some things that can be picked up by inspecting
outgoing traffic and applied to the requests that come back in response.
These include:

* Parameters that can come back and whether they are in the URI or the body
of a POST
* Maximum lengths if they are specified as part of the outgoing HTML
* The expected method associated with the response

Adding a cookie on the way out and using it to index into the session coming
back facilitates this if you are doing it dynamically. You can also specify
most of this up front.

This is not too deep and will stop most Web 1.x nasties. Web 2.x... that's
another story - I haven't had to worry about it :-)

Cheers
Andrew



-----Original Message-----
From: Olivier Le Cam [mailto:[email protected]] 
Sent: Wednesday, 17 March 2010 12:07 AM
To: [email protected]
Subject: mod_security and/or fail2ban

Hi -

I am exploring various solutions in order to implement some filtering 
features (even basic ones) at the haproxy side. My goal is to get rid of 
the most popular bots and vulnerability scanners.

Would someone be aware of a way to perform such filtering with haproxy
using, say, the modsecurity CRS?

Another alternative could be to scan the haproxy logs with fail2ban or
equivalent. I was wondering if that could be satisfactory enough and if
some fail2ban rulesets might already be available for that.

Thanks in anticipation for any ideas/pointers!

-- 
Olivier Le Cam
Département des Technologies de l'Information et des Communications
Académie de Versailles

