[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14950519#comment-14950519
 ] 

Karthik Kambatla edited comment on MAPREDUCE-6302 at 10/9/15 2:58 PM:
----------------------------------------------------------------------

Just committed to trunk and branch-2. branch-2 had a conflict; the submitted 
patch is attached. Verified that the trunk and branch-2 patches don't differ 
in code. 

Thanks, everyone, for providing input, and Jason and Anubhav for your reviews. 
Glad to resolve this long-standing issue. 



> Preempt reducers after a configurable timeout irrespective of headroom
> ----------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6302
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>            Assignee: Karthik Kambatla
>            Priority: Critical
>             Fix For: 2.8.0
>
>         Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, 
> log.txt, mr-6302-1.patch, mr-6302-2.patch, mr-6302-3.patch, mr-6302-4.patch, 
> mr-6302-5.patch, mr-6302-6.patch, mr-6302-7.patch, mr-6302-prelim.patch, 
> mr-6302_branch-2.patch, queue_with_max163cores.png, 
> queue_with_max263cores.png, queue_with_max333cores.png
>
>
> I submitted a big job, with 500 maps and 350 reduces, to a queue (fair 
> scheduler) with a maximum of 300 cores. Once the job reached 100% of maps, 
> 300 reduces had occupied all 300 cores in the queue. Then a map failed and 
> was retried, waiting for a core, while the 300 reduces were waiting for the 
> failed map to finish. So a deadlock occurred: the job was blocked, and later 
> jobs in the queue could not run because no cores were available in the queue.
> I think a similar issue exists for the memory of a queue.
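The committed change lets the AM preempt reducers unconditionally after a configurable timeout, regardless of the headroom the scheduler reports, which breaks the deadlock above. As a sketch only (the property name is taken from the committed patch; verify the exact name and default against your release's mapred-default.xml), a job could opt in via:

```
<!-- mapred-site.xml, or set per job -->
<property>
  <!-- Preempt reducers after this many seconds of a starved mapper,
       irrespective of reported headroom. A non-positive value disables
       the unconditional preemption. -->
  <name>mapreduce.job.reducer.unconditional-preempt.delay.sec</name>
  <value>300</value>
</property>
```

With this set, a mapper that waits longer than the delay forces reducer preemption even when the scheduler claims headroom is available, so the failed map in the scenario above eventually gets a container.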



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
