[ https://issues.apache.org/jira/browse/NUTCH-2328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15586425#comment-15586425 ]

Arthur B edited comment on NUTCH-2328 at 10/18/16 7:28 PM:
-----------------------------------------------------------

I don't think this has anything to do with Spring/Hadoop: it merely recreates the 
issue. I believe you can reproduce it from a standalone app that submits the 
GeneratorJob twice to Hadoop (so I reckon we can disregard the Spring Data 
Hadoop aspect). 
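To make that failure mode concrete, here is a minimal, self-contained sketch (not 
the actual Nutch code) of what happens when the generate step is driven twice 
from the same JVM / process space:

{code:java}
// Minimal illustration of the failure mode, not actual Nutch code: a static
// counter survives between job submissions made from the same process space.
public class StaticCountSketch {

  private static long count = 0;        // stands in for GeneratorReducer#count
  private static final long TOP_N = 10;

  /** Pretends to be one generate run; returns how many entries it "generated". */
  static long runOnce(int candidates) {
    long generated = 0;
    for (int i = 0; i < candidates && count < TOP_N; i++) {
      count++;                          // never reset between runs
      generated++;
    }
    return generated;
  }

  public static void main(String[] args) {
    System.out.println("first run:  " + runOnce(100));   // prints 10
    System.out.println("second run: " + runOnce(100));   // prints 0
  }
}
{code}

The second run starts with the static {{count}} already at the limit, so nothing 
is generated. 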
Regarding the locality of the {{count}} variable: I do not believe that turning 
it into an instance member would do here; unless I am missing something, that 
will not solve the issue. My reasoning is that once a {{GeneratorJob}} has been 
submitted to Hadoop, you cannot count on it residing on a single Hadoop node. 
The M/R job may well run on multiple nodes, and it should not hold state like 
this (you cannot rely on its locality). So in my opinion the only solution is a 
cluster-wide propagated counter that keeps track of how many {{WebPage}}s have 
been dealt with. 
AFAIK a local class instance variable would not do this (propagate a 
cluster-wide count)... unless I am missing something. This can easily be tested 
by running on a two-machine cluster and letting more than one reducer run. 
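For reference, here is a rough sketch of the kind of job-level counter I have in 
mind (the class and counter names below are purely illustrative, not the actual 
Nutch code):

{code:java}
import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Illustrative only: each reduce task increments a job-level Hadoop counter,
// and the framework aggregates the increments across all tasks of the job.
public class CountingReducerSketch extends Reducer<Text, Text, Text, Text> {

  public enum Counters { GENERATED }

  @Override
  protected void reduce(Text key, Iterable<Text> values, Context context)
      throws IOException, InterruptedException {
    for (Text value : values) {
      context.write(key, value);
      context.getCounter(Counters.GENERATED).increment(1);
    }
  }
}

// Driver side, after job.waitForCompletion(true):
// long generated = job.getCounters()
//     .findCounter(CountingReducerSketch.Counters.GENERATED).getValue();
{code}

The caveat is that counter values are only reliably aggregated once the tasks 
have finished, so enforcing a hard topN while the reducers are still running 
would probably still need a per-task quota on top of this. 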



> GeneratorJob does not generate anything on second run
> -----------------------------------------------------
>
>                 Key: NUTCH-2328
>                 URL: https://issues.apache.org/jira/browse/NUTCH-2328
>             Project: Nutch
>          Issue Type: Bug
>          Components: generator
>    Affects Versions: 2.2, 2.3, 2.2.1, 2.3.1
>         Environment: Ubuntu 16.04 / Hadoop 2.7.1
>            Reporter: Arthur B
>              Labels: fails, generator, subsequent
>             Fix For: 2.4
>
>         Attachments: generator-issue-static-count.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Given a topN parameter (e.g. 10), the GeneratorJob will fail to generate 
> anything new on subsequent runs within the same process space.
> To reproduce the issue, submit the GeneratorJob twice in a row to the 
> M/R framework; the second run will report that it generated 0 URLs.
> This issue is due to the usage of the static count field 
> (org.apache.nutch.crawl.GeneratorReducer#count) to determine if the topN 
> value has been reached.


