[ https://issues.apache.org/jira/browse/NUTCH-1331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13537814#comment-13537814 ]

Hudson commented on NUTCH-1331:
-------------------------------

Integrated in nutch-trunk-maven #533 (See 
[https://builds.apache.org/job/nutch-trunk-maven/533/])
    NUTCH-1331 limit crawler to defined depth (jnioche) (Revision 1424875)

     Result = SUCCESS
jnioche : 
Files : 
* /nutch/trunk/CHANGES.txt
* /nutch/trunk/conf/nutch-default.xml
* /nutch/trunk/src/plugin/build.xml
* /nutch/trunk/src/plugin/scoring-depth
* /nutch/trunk/src/plugin/scoring-depth/build.xml
* /nutch/trunk/src/plugin/scoring-depth/ivy.xml
* /nutch/trunk/src/plugin/scoring-depth/plugin.xml
* /nutch/trunk/src/plugin/scoring-depth/src
* /nutch/trunk/src/plugin/scoring-depth/src/java
* /nutch/trunk/src/plugin/scoring-depth/src/java/org
* /nutch/trunk/src/plugin/scoring-depth/src/java/org/apache
* /nutch/trunk/src/plugin/scoring-depth/src/java/org/apache/nutch
* /nutch/trunk/src/plugin/scoring-depth/src/java/org/apache/nutch/scoring
* /nutch/trunk/src/plugin/scoring-depth/src/java/org/apache/nutch/scoring/depth
* /nutch/trunk/src/plugin/scoring-depth/src/java/org/apache/nutch/scoring/depth/DepthScoringFilter.java

> limit crawler to defined depth
> ------------------------------
>
>                 Key: NUTCH-1331
>                 URL: https://issues.apache.org/jira/browse/NUTCH-1331
>             Project: Nutch
>          Issue Type: New Feature
>          Components: generator, parser, storage
>    Affects Versions: 1.4
>            Reporter: behnam nikbakht
>             Fix For: 1.7
>
>         Attachments: NUTCH-1331.patch, NUTCH-1331-v2.patch
>
>
> There is a need to limit the crawler to a defined depth. This option matters 
> because it avoids crawling infinite loops of dynamically generated URLs, 
> which occur on some sites, and helps the crawler focus on important URLs.
> One option is to define an iteration limit on the generate/fetch/parse/updatedb 
> cycle, but that only works if, in each cycle, all unfetched URLs become fetched 
> (without recrawling them, and with some other considerations).
> Instead, we can define a new field in CrawlDatum, named depth, and, like the 
> scoring-opic algorithm, compute the depth of a link after parse; then, in 
> generate, select only URLs with a valid depth.
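The mechanism described above can be sketched as follows. This is a minimal, self-contained illustration of the idea (outlinks inherit the parent's depth + 1, and generate skips URLs past a configured maximum); the `Page` record, `MAX_DEPTH` constant, and method names are hypothetical and are not the actual scoring-depth plugin API.

```java
import java.util.ArrayList;
import java.util.List;

public class DepthSketch {
    // Illustrative limit, analogous in spirit to a configured max depth.
    static final int MAX_DEPTH = 2;

    // Hypothetical stand-in for a crawl record carrying a depth field.
    record Page(String url, int depth) {}

    // After parsing a page, each outlink is assigned the parent's depth + 1.
    static List<Page> outlinks(Page parent, List<String> urls) {
        List<Page> out = new ArrayList<>();
        for (String u : urls) {
            out.add(new Page(u, parent.depth + 1));
        }
        return out;
    }

    // In generate, select only URLs whose depth is still within the limit.
    static boolean shouldFetch(Page p) {
        return p.depth <= MAX_DEPTH;
    }

    public static void main(String[] args) {
        Page seed = new Page("http://example.com/", 0);
        List<Page> l1 = outlinks(seed, List.of("http://example.com/a"));
        List<Page> l2 = outlinks(l1.get(0), List.of("http://example.com/a/b"));
        List<Page> l3 = outlinks(l2.get(0), List.of("http://example.com/a/b/c"));
        System.out.println(shouldFetch(l1.get(0))); // depth 1: fetched
        System.out.println(shouldFetch(l3.get(0))); // depth 3: skipped
    }
}
```

With this scheme, a page discovered at depth 3 from the seed is simply never selected by generate, so infinite dynamic-URL loops are cut off after a fixed number of hops.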

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
