[ https://issues.apache.org/jira/browse/HADOOP-1874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12533291 ]
Hadoop QA commented on HADOOP-1874:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12367307/1874.new.patch
against trunk revision r583037.
    @author       +1. The patch does not contain any @author tags.
    javadoc       +1. The javadoc tool did not generate any warning messages.
    javac         +1. The applied patch does not generate any new compiler warnings.
    findbugs      +1. The patch does not introduce any new Findbugs warnings.
    core tests    +1. The patch passed core unit tests.
    contrib tests +1. The patch passed contrib unit tests.
Test results:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/908/testReport/
Findbugs warnings:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/908/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/908/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://lucene.zones.apache.org:8080/hudson/job/Hadoop-Patch/908/console
This message is automatically generated.
> lost task trackers -- jobs hang
> -------------------------------
>
> Key: HADOOP-1874
> URL: https://issues.apache.org/jira/browse/HADOOP-1874
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Affects Versions: 0.15.0
> Reporter: Christian Kunz
> Assignee: Devaraj Das
> Priority: Blocker
> Fix For: 0.15.0
>
> Attachments: 1874.new.patch, 1874.patch, lazy-dfs-ops.1.patch,
> lazy-dfs-ops.2.patch, lazy-dfs-ops.4.patch, lazy-dfs-ops.patch,
> server-throttle-hack.patch
>
>
> This happens on a 1400-node cluster using a recent nightly build patched with
> HADOOP-1763 (which fixes a previous 'lost task tracker' issue), running a
> c++-pipes job with 4200 maps and 2800 reduces. Task trackers start to get
> lost in large numbers as the job nears completion.
> Similar non-pipes jobs do not show the same problem, but it is unclear
> whether the issue is specific to c++-pipes. It could also be dfs overload
> when reduce tasks close and validate all their newly created dfs files: I do
> see dfs client rpc timeout exceptions, but these alone do not explain the
> escalation in lost task trackers.
> I also noticed that the job tracker becomes rather unresponsive, with rpc
> timeout and call queue overflow exceptions. The Job Tracker is running with
> 60 handlers (see the configuration sketch after this description).
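For reference, the handler count mentioned in the description is controlled by
the mapred.job.tracker.handler.count property. A minimal hadoop-site.xml
sketch, assuming the 0.15-era property name; the value 60 matches what is
reported in this issue:

    <!-- hadoop-site.xml: number of RPC handler threads in the JobTracker.
         Too few handlers under heavy load can surface as rpc timeouts and
         call queue overflow exceptions like those described above. -->
    <property>
      <name>mapred.job.tracker.handler.count</name>
      <value>60</value>
    </property>

The JobTracker reads this property at startup, so changing it requires a
JobTracker restart; it cannot be overridden per-job from client code.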
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.