[
https://issues.apache.org/jira/browse/MAPREDUCE-2324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13069127#comment-13069127
]
[email protected] commented on MAPREDUCE-2324:
----------------------------------------------------------
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/1164/
-----------------------------------------------------------
Review request for hadoop-mapreduce, Todd Lipcon, Tom Graves, and Jeffrey Naisbitt.
Summary
-------
Job should fail if a reduce task can't be scheduled anywhere. V2 of the patch.
This addresses bug MAPREDUCE-2324.
https://issues.apache.org/jira/browse/MAPREDUCE-2324
Diffs
-----
branches/branch-0.20-security/src/mapred/org/apache/hadoop/mapred/JobInProgress.java 1148035
branches/branch-0.20-security/src/mapred/org/apache/hadoop/mapred/TaskTracker.java 1148035
branches/branch-0.20-security/src/test/org/apache/hadoop/mapred/MiniMRCluster.java 1148035
branches/branch-0.20-security/src/test/org/apache/hadoop/mapred/TestTaskLimits.java 1148035
Diff: https://reviews.apache.org/r/1164/diff
Testing
-------
Ran unit tests and manual tests on a single-node cluster.
Thanks,
Robert
> Job should fail if a reduce task can't be scheduled anywhere
> ------------------------------------------------------------
>
> Key: MAPREDUCE-2324
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-2324
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 0.20.2, 0.20.205.0
> Reporter: Todd Lipcon
> Assignee: Robert Joseph Evans
> Fix For: 0.20.205.0
>
> Attachments: MR-2324-security-v1.txt, MR-2324-security-v2.txt
>
>
> If there's a reduce task that needs more disk space than is available on any
> mapred.local.dir in the cluster, that task will stay pending forever. For
> example, we produced this in a QA cluster by accidentally running terasort
> with one reducer - since no mapred.local.dir had 1T free, the job remained in
> pending state for several days. The reason for the "stuck" task wasn't clear
> from a user perspective until we looked at the JT logs.
> Probably better to just fail the job if a reduce task goes through all TTs
> and finds that there isn't enough space.
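A minimal sketch of the kind of bookkeeping the description suggests: remember which TaskTrackers have turned a reduce away for lack of mapred.local.dir space, and fail the job once every tracker in the cluster has done so instead of leaving the task pending forever. The class and method names below are hypothetical illustrations; they do not mirror the actual changes in the attached patches.

    import java.util.HashSet;
    import java.util.Set;

    /**
     * Illustrative sketch only (not the MAPREDUCE-2324 patch): tracks which
     * TaskTrackers rejected a reduce for insufficient local disk space and
     * signals when the job should be failed rather than left pending.
     */
    public class ReduceSchedulabilityTracker {
        // Trackers that could not provide enough mapred.local.dir space for the reduce.
        private final Set<String> trackersWithoutSpace = new HashSet<String>();

        /**
         * Record that a tracker had insufficient space for the reduce's estimated output.
         *
         * @param trackerName   the TaskTracker that was considered
         * @param totalTrackers number of TaskTrackers currently in the cluster
         * @return true if every known tracker has now rejected the reduce,
         *         i.e. the job should be failed with a clear diagnostic
         */
        public synchronized boolean recordInsufficientSpace(String trackerName, int totalTrackers) {
            trackersWithoutSpace.add(trackerName);
            return totalTrackers > 0 && trackersWithoutSpace.size() >= totalTrackers;
        }

        /** Clear the rejection history, e.g. when trackers join or disk space frees up. */
        public synchronized void reset() {
            trackersWithoutSpace.clear();
        }
    }

Surfacing the failure this way also gives the user a direct explanation, rather than requiring a look through the JobTracker logs as described above.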
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira