mai shurong commented on YARN-3416:

I found a new case today. I submitted an even larger job, with 5800 maps and 380 
reduces, to a queue whose maximum is 263 cores. Even though no maps failed, the 
deadlock between map and reduce core allocation occurred every time across several 
attempts. I also tried submitting to other queues: as long as a job's reduces 
exceed the queue's max cores, the deadlock always happened.
I attach screenshots of the deadlocked jobs, along with the first 100000 lines 
(AM_log_head100000.txt.gz) and the last 100000 lines (AM_log_tail100000.txt.gz) of 
the AM log of one deadlocked job.
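To summarize the pattern across my attempts, here is a small sketch (not Hadoop code; the function name and the one-vcore-per-container assumption are mine) of the condition under which I see the deadlock:

```python
def can_deadlock(num_reduces, queue_max_cores):
    """Sketch of the reported condition, assuming 1 vcore per container.

    Once reduces ramp up (after the slowstart fraction of maps complete),
    they can occupy every core in the queue. If a map then needs to run
    (e.g. a retry), it waits for a core while the reduces wait for that
    map's output: a deadlock.
    """
    return num_reduces > queue_max_cores

# Cases from this report:
print(can_deadlock(380, 263))  # True  (5800 maps / 380 reduces, 263-core queue)
print(can_deadlock(350, 300))  # True  (500 maps / 350 reduces, 300-core queue)
```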

The parameter mapreduce.job.reduce.slowstart.completedmaps is 0.5.
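For reference, this is how the setting appears in mapred-site.xml (it can also be passed per-job); the value below is the one used in my tests:

```xml
<property>
  <name>mapreduce.job.reduce.slowstart.completedmaps</name>
  <value>0.5</value>
</property>
```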

> deadlock in a job between map and reduce cores allocation 
> ----------------------------------------------------------
>                 Key: YARN-3416
>                 URL: https://issues.apache.org/jira/browse/YARN-3416
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.6.0
>            Reporter: mai shurong
>            Priority: Critical
>         Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz, 
> queue_with_max163cores.png, queue_with_max263cores.png, 
> queue_with_max333cores.png
> I submitted a big job, which has 500 maps and 350 reduces, to a 
> queue (fairscheduler) with 300 max cores. When the big mapreduce job has 
> run 100% of its maps, 300 of the reduces have occupied the queue's 300 max 
> cores. Then a map fails and retries, waiting for a core, while the running 
> reduces are waiting for the failed map to finish. So a deadlock occurs. As 
> a result, the job is blocked, and later jobs in the queue cannot run 
> because no cores are available in the queue.
> I think there is a similar issue for the memory of a queue.
