One possible solution would be to whitelist the crawler's
User Agent by doing the following:
- determine the User Agent that the bot is sending with the request
- determine which rule(s) are triggering the Active Response
- write new child rule(s) that match the User Agent of the bot and
lower the severity level to prevent Active Response
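
As a rough sketch, a child rule like that in local_rules.xml might look
something like this (the rule ID 100100, the parent rule 31101, and the
"Googlebot" match string are just examples -- substitute the actual rule
ID and User Agent you identified in the steps above):

```xml
<group name="local,web,">
  <!-- Example child rule: when the parent web rule (31101 is used here
       purely as an illustration) fires on a request whose log line
       contains the crawler's User Agent, drop the level to 0 so no
       alert is generated and Active Response is never triggered. -->
  <rule id="100100" level="0">
    <if_sid>31101</if_sid>
    <match>Googlebot</match>
    <description>Ignore web errors from the Googlebot crawler</description>
  </rule>
</group>
```

Note that level 0 silences the alert entirely; if you still want to see
these events without triggering Active Response, use a low non-zero
level below your active-response threshold instead.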

Regards,
-- 
Doug Burks, GSE, CISSP
President, Greater Augusta ISSA
http://augusta.issa.org
http://securityonion.blogspot.com


On Tue, Feb 22, 2011 at 4:02 AM, Steve <[email protected]> wrote:
> I've been looking for a way to add domains to the whitelist to prevent
> active-response. I can see similar questions have been asked, but I
> cannot find any with an answer.
>
> The issue is active-response taking action against a web crawler
> (Google, etc) if they attempt to crawl many pages that no longer
> exist. Most search engines do not publish an IP range/block and
> require a host lookup.
>
> As I understand it, the whitelist can take a set of IP addresses or an
> IP block; can it take a domain name, e.g. googlebot.com?
>
> If not, has anyone successfully and safely found a way to use
> active-response without it blocking search engines?
>
> Steve
