This is old. It has been fixed in more recent versions of Hadoop and Nutch.

Otis Gospodnetic (JIRA) wrote:
[ https://issues.apache.org/jira/browse/NUTCH-675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12658610#action_12658610 ]
Otis Gospodnetic commented on NUTCH-675:
----------------------------------------

Sha Feng, could you please bring this up on the Nutch mailing list instead of 
JIRA?
It would also be good if you could upgrade your Nutch (including Hadoop) and 
see if it works then.  0.12 is a VERY old version of Hadoop.


Reduce tasks do not report their status and are killed by jobtracker
--------------------------------------------------------------------

                Key: NUTCH-675
                URL: https://issues.apache.org/jira/browse/NUTCH-675
            Project: Nutch
         Issue Type: Bug
         Components: fetcher
   Affects Versions: 0.9.0
        Environment: OS : Linux
           Reporter: sha feng
            Fix For: 0.9.0


We chose Fetcher2 as our fetcher. Its map tasks fetch about 2,000,000 URLs, but at the reduce stage all of the reduce tasks fail to report their status and are killed by the jobtracker. Although we changed mapred.task.timeout from 60,000 to 1,800,000, it does not help. Can anyone tell us why? By the way, the version of Nutch we use is 0.9 and the version of Hadoop is 0.12. Thanks for your help!
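For reference, the usual workaround for this symptom is to have the reduce side call Reporter.progress() / Reporter.setStatus() while it works, so the jobtracker does not treat the task as hung once mapred.task.timeout (milliseconds) elapses. Below is a minimal sketch; the class name is hypothetical and it is written against the generic org.apache.hadoop.mapred API of later Hadoop releases (the 0.12-era signatures differ slightly).

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical reducer showing the heartbeat pattern: report progress
// periodically so the framework does not kill the task as hung.
public class ProgressReportingReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
      // Heartbeat: without calls like this during long-running work, the
      // task is considered hung and killed once mapred.task.timeout elapses.
      reporter.progress();
    }
    reporter.setStatus("reduced key " + key);
    output.collect(key, new IntWritable(sum));
  }
}

Raising mapred.task.timeout (set in hadoop-site.xml on Hadoop of that era) only buys time; if the reducer never reports progress at all, it will eventually be killed anyway.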
