[
https://issues.apache.org/jira/browse/MAPREDUCE-5817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956926#comment-13956926
]
Hadoop QA commented on MAPREDUCE-5817:
--------------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12638107/mapreduce-5817.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new
or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:red}-1 core tests{color}. The patch failed these unit tests in
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app:
org.apache.hadoop.mapreduce.v2.app.TestMRAppMaster
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4476//testReport/
Console output:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4476//console
This message is automatically generated.
> mappers get rescheduled on node transition even after all reducers are
> completed
> --------------------------------------------------------------------------------
>
> Key: MAPREDUCE-5817
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-5817
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Components: applicationmaster
> Affects Versions: 2.3.0
> Reporter: Sangjin Lee
> Assignee: Sangjin Lee
> Attachments: mapreduce-5817.patch
>
>
> We're seeing behavior where a job keeps running long after all of its reducers
> have finished. We found that the job was rescheduling and running a number of
> mappers beyond the point of reducer completion. In one case, the job ran for
> some 9 more hours after all reducers completed!
> This happens because whenever a node transition (to an unusable state) reaches
> the app master, it unconditionally reschedules all mappers that already ran on
> that node.
> Therefore, any node transition has the potential to extend the job's runtime.
> Once this window opens, another node transition can prolong it, and in theory
> this can repeat indefinitely.
> If there is instability in the node pool (nodes going unhealthy, etc.) for a
> period of time, any large job is severely vulnerable to this problem.
> Once all reducers have completed, JobImpl.actOnUnusableNode() should not
> reschedule mapper tasks: the mapper outputs are no longer needed, since no
> reducer remains to consume them.
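> Below is a minimal, self-contained sketch of the decision this issue proposes.
> It is not the actual JobImpl code; all names except actOnUnusableNode() are
> hypothetical stand-ins for JobImpl's internal task bookkeeping.
> {code:java}
> // Hypothetical model of the guard proposed for JobImpl.actOnUnusableNode():
> // skip rescheduling succeeded mappers on an unusable node once every reducer
> // has finished.
> public class UnusableNodeSketch {
>   private final int totalReduces;
>   private int completedReduces;
>
>   public UnusableNodeSketch(int totalReduces) {
>     this.totalReduces = totalReduces;
>   }
>
>   public void reducerCompleted() {
>     completedReduces++;
>   }
>
>   /** True if succeeded mappers on a lost node should be rerun. */
>   public boolean shouldRescheduleMappers() {
>     // Once all reducers are done, nothing will fetch the map outputs again,
>     // so rerunning the mappers only prolongs the job for no benefit.
>     return completedReduces < totalReduces;
>   }
>
>   public static void main(String[] args) {
>     UnusableNodeSketch job = new UnusableNodeSketch(2);
>     System.out.println(job.shouldRescheduleMappers()); // true: reducers still running
>     job.reducerCompleted();
>     job.reducerCompleted();
>     System.out.println(job.shouldRescheduleMappers()); // false: skip rescheduling
>   }
> }
> {code}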
--
This message was sent by Atlassian JIRA
(v6.2#6252)