[
https://issues.apache.org/jira/browse/NUTCH-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14497251#comment-14497251
]
Sebastian Nagel commented on NUTCH-1927:
----------------------------------------
Hi Chris, the class WhiteListRobotRules still seems overly complex to me. It
should be possible to keep the cache as is and only put a reference to a
light-weight singleton RobotRules object (such as the one created by the
default constructor of WhiteListRobotRules) into it in case a host is
whitelisted.
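Roughly what I have in mind (a sketch only, not the actual patch: the class
and method names and the cache shape are illustrative; the allow-all rules
object comes from crawler-commons):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import crawlercommons.robots.BaseRobotRules;
import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRules.RobotRulesMode;

public class RobotRulesCacheSketch {

  // One shared, immutable allow-all instance is enough for all
  // whitelisted hosts; no per-host WhiteListRobotRules objects needed.
  private static final BaseRobotRules WHITELIST_RULES =
      new SimpleRobotRules(RobotRulesMode.ALLOW_ALL);

  private final Map<String, BaseRobotRules> cache = new HashMap<>();
  private final Set<String> whitelist; // hosts/IPs from robot.rules.whitelist

  public RobotRulesCacheSketch(Set<String> whitelist) {
    this.whitelist = whitelist;
  }

  public BaseRobotRules getRules(String host) {
    BaseRobotRules rules = cache.get(host);
    if (rules == null) {
      rules = whitelist.contains(host)
          ? WHITELIST_RULES               // skip robots.txt entirely
          : fetchAndParseRobotsTxt(host); // existing code path
      cache.put(host, rules);
    }
    return rules;
  }

  private BaseRobotRules fetchAndParseRobotsTxt(String host) {
    // stand-in for the existing robots.txt fetch/parse logic
    return new SimpleRobotRules(RobotRulesMode.ALLOW_NONE);
  }
}
{code}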
Also, I do not understand why getCrawlDelay() needs to store the last URL: the
Crawl-Delay specified in robots.txt overrides the default delay/interval when a
robot/crawler accesses the same host successively; it is a fixed value and does
not depend on any previous fetches.
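For illustration (again just a sketch; where the default delay comes from,
e.g. fetcher.server.delay, is an assumption):
{code:java}
import crawlercommons.robots.BaseRobotRules;

final class CrawlDelaySketch {

  // The Crawl-Delay from robots.txt is a fixed per-host value: it can
  // override the default delay once, when the rules for the host are
  // looked up, without any record of the previously fetched URL.
  static long effectiveDelayMs(BaseRobotRules rules, long defaultDelayMs) {
    long crawlDelay = rules.getCrawlDelay(); // UNSET_CRAWL_DELAY if absent
    return crawlDelay != BaseRobotRules.UNSET_CRAWL_DELAY
        ? crawlDelay
        : defaultDelayMs;
  }
}
{code}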
I don't know whether this is a problem: we use org.slf4j.Logger (almost)
everywhere, not java.util.logging.Logger.
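That is, the usual pattern would be:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class WhiteListRobotRules {
  // Nutch convention: slf4j instead of java.util.logging
  private static final Logger LOG =
      LoggerFactory.getLogger(WhiteListRobotRules.class);
}
{code}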
> Create a whitelist of IPs/hostnames to allow skipping of RobotRules parsing
> ---------------------------------------------------------------------------
>
> Key: NUTCH-1927
> URL: https://issues.apache.org/jira/browse/NUTCH-1927
> Project: Nutch
> Issue Type: New Feature
> Components: fetcher
> Reporter: Chris A. Mattmann
> Assignee: Chris A. Mattmann
> Labels: available, patch
> Fix For: 1.10
>
> Attachments: NUTCH-1927.Mattmann.041115.patch.txt,
> NUTCH-1927.Mattmann.041215.patch.txt, NUTCH-1927.Mattmann.041415.patch.txt
>
>
> Based on discussion on the dev list, to support some valid security-research
> use cases for Nutch (DDoS, DNS, and other testing), I am going to create a
> patch that allows a whitelist:
> {code:xml}
> <property>
>   <name>robot.rules.whitelist</name>
>   <value>132.54.99.22,hostname.apache.org,foo.jpl.nasa.gov</value>
>   <description>Comma-separated list of hostnames or IP addresses for which
>   robots rules parsing will be skipped.
>   </description>
> </property>
> {code}