[ https://issues.apache.org/jira/browse/YARN-3416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386992#comment-14386992 ]

Ray Chiang commented on YARN-3416:
----------------------------------

There probably is a bug here, but what value do you have for the property:

  mapreduce.job.reduce.slowstart.completedmaps

in mapred-default.xml?  If it's close to 0.0, I'd suggest increasing it 
closer to 1.0 to keep the number of pending reducers down.  That will 
likely cost some performance, but it should at least allow your job to 
complete.
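
For reference, here's a minimal sketch of that override in mapred-site.xml 
(which takes precedence over mapred-default.xml); 1.0 is the upper end 
suggested above, and the right value for your workload may well be lower:

  <!-- mapred-site.xml: raise the reduce slowstart threshold.
       At 1.0, reducers are not scheduled until every map has finished,
       so they cannot occupy cores that failed/retried maps still need. -->
  <property>
    <name>mapreduce.job.reduce.slowstart.completedmaps</name>
    <value>1.0</value>
  </property>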

> deadlock in a job between map and reduce cores allocation 
> ----------------------------------------------------------
>
>                 Key: YARN-3416
>                 URL: https://issues.apache.org/jira/browse/YARN-3416
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>
> I submitted a big job with 500 maps and 350 reduces to a queue (fair 
> scheduler) with a 300-core maximum. While the job was still running its 
> maps, 300 of the reduces started and occupied all 300 cores in the queue. 
> Then a map failed and was retried, waiting for a core, while the 300 
> running reduces were waiting for the failed map to finish, so a deadlock 
> occurred. As a result, the job was blocked, and later jobs in the queue 
> could not run because no cores were available in the queue.
> I think there is a similar issue for the memory limit of a queue.
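
For context, a per-queue cap like the 300 cores described above is normally 
set via maxResources in the fair scheduler's allocation file. A minimal 
sketch, assuming a hypothetical queue name and an illustrative memory figure:

  <!-- fair-scheduler.xml: cap the queue at 300 vcores.
       The queue name and memory figure here are illustrative only,
       not taken from the report. -->
  <allocations>
    <queue name="bigjobs">
      <maxResources>300000 mb,300 vcores</maxResources>
    </queue>
  </allocations>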



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
