[
https://issues.apache.org/jira/browse/NUTCH-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13545691#comment-13545691
]
Tejas Patil commented on NUTCH-1513:
------------------------------------
Hi Lewis,
Thanks for your suggestion. I think that first migrating Http to
crawler-commons ([NUTCH-1031|https://issues.apache.org/jira/browse/NUTCH-1031])
and then coming back to this one would be the better approach. I have made the
changes for Http and attached the patch to the respective Jira.
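Once that migration is in, FTP robots support could look roughly like the sketch below, using the crawler-commons SimpleRobotRulesParser. This is only a rough outline, not a patch: fetchFtpFile() is a hypothetical placeholder for the plugin's own FTP retrieval code, and the signature would of course need to match whatever the Protocol interface looks like after NUTCH-1031.
{noformat}
import java.net.URL;
import crawlercommons.robots.BaseRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;

// Sketch only: fetch /robots.txt from the FTP server and hand it to
// crawler-commons for parsing, instead of returning EmptyRobotRules.RULES.
public BaseRobotRules getRobotRules(String url, String agentName) {
  SimpleRobotRulesParser parser = new SimpleRobotRulesParser();
  try {
    // Resolve ftp://host/robots.txt relative to the page URL.
    URL robotsUrl = new URL(new URL(url), "/robots.txt");
    // fetchFtpFile() is a hypothetical stand-in for the plugin's own
    // FTP retrieval (FtpResponse etc.).
    byte[] content = fetchFtpFile(robotsUrl);
    return parser.parseContent(robotsUrl.toString(), content,
        "text/plain", agentName);
  } catch (Exception e) {
    // robots.txt missing or unreadable: treat it like an HTTP 404,
    // i.e. allow everything.
    return parser.failedFetch(404);
  }
}
{noformat}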
> Support Robots.txt for Ftp urls
> -------------------------------
>
> Key: NUTCH-1513
> URL: https://issues.apache.org/jira/browse/NUTCH-1513
> Project: Nutch
> Issue Type: Improvement
> Affects Versions: 1.7, 2.2
> Reporter: Tejas Patil
> Assignee: Lewis John McGibbney
> Priority: Minor
> Labels: robots.txt
>
> As per [0], an FTP website can have a robots.txt like [1]. In the Nutch code,
> the Ftp plugin does not parse the robots file and accepts all URLs.
> In "_src/plugin/protocol-ftp/src/java/org/apache/nutch/protocol/ftp/Ftp.java_":
> {noformat}
> public RobotRules getRobotRules(Text url, CrawlDatum datum) {
>   return EmptyRobotRules.RULES;
> }
> {noformat}
> It is not clear whether this was part of the design or a bug.
> [0] :
> https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
> [1] : ftp://example.com/robots.txt