[ https://issues.apache.org/jira/browse/NUTCH-1031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tejas Patil updated NUTCH-1031:
-------------------------------

    Attachment: NUTCH-1031-trunk.v3.patch

Hi [~wastl-nagel], I have made the suggested changes.

@[~amuseme.lu] : #1 is done. As the newer version of CC hasn't been released 
publicly yet (I cannot see it on the project page or in Maven), I am skipping 
#2 for now. For #3, I am not keen on creating a separate robots plugin, since 
the robots check is mandatory for every crawler. Hence I have kept the 
RobotRulesParser class in core. However, the protocol-specific robots 
implementations (currently HttpRobotRulesParser is added in this patch) live 
inside the respective protocol plugins.
                
> Delegate parsing of robots.txt to crawler-commons
> -------------------------------------------------
>
>                 Key: NUTCH-1031
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1031
>             Project: Nutch
>          Issue Type: Task
>            Reporter: Julien Nioche
>            Assignee: Tejas Patil
>            Priority: Minor
>              Labels: robots.txt
>             Fix For: 1.7
>
>         Attachments: CC.robots.multiple.agents.patch, 
> CC.robots.multiple.agents.v2.patch, NUTCH-1031-trunk.v2.patch, 
> NUTCH-1031-trunk.v3.patch, NUTCH-1031.v1.patch
>
>
> We're about to release the first version of Crawler-Commons 
> [http://code.google.com/p/crawler-commons/] which contains a parser for 
> robots.txt files. This parser should also be better than the one we currently 
> have in Nutch. I will delegate this functionality to CC as soon as it is 
> available publicly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
