[
https://issues.apache.org/jira/browse/MAPREDUCE-6302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Karthik Kambatla reassigned MAPREDUCE-6302:
-------------------------------------------
Assignee: Karthik Kambatla
> deadlock in a job between map and reduce cores allocation
> ----------------------------------------------------------
>
> Key: MAPREDUCE-6302
> URL: https://issues.apache.org/jira/browse/MAPREDUCE-6302
> Project: Hadoop Map/Reduce
> Issue Type: Bug
> Affects Versions: 2.6.0
> Reporter: mai shurong
> Assignee: Karthik Kambatla
> Priority: Critical
> Attachments: AM_log_head100000.txt.gz, AM_log_tail100000.txt.gz,
> queue_with_max163cores.png, queue_with_max263cores.png,
> queue_with_max333cores.png
>
>
> I submit a big job, which has 500 maps and 350 reduces, to a queue
> (FairScheduler) with a maximum of 300 cores. By the time the job's map phase
> reaches 100%, the 300 reduces have occupied all 300 cores of the queue. Then
> a map fails and is retried, waiting for a core, while the 300 reduces are
> waiting for the failed map to finish, so a deadlock occurs. As a result, the
> job is blocked, and later jobs in the queue cannot run because there are no
> cores available in the queue.
> I think there is a similar issue with the memory limit of a queue.
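> A minimal sketch of a common client-side mitigation (an illustration only,
> not the fix tracked by this JIRA): delay the launch of reduces until all maps
> have completed via mapreduce.job.reduce.slowstart.completedmaps, so a retried
> map can still obtain a core. The class and job name below are hypothetical.
> {code:java}
> // Hypothetical client-side workaround: hold back reduces until every map
> // has finished, so a failed map's retry is not starved of containers.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.mapreduce.Job;
>
> public class SlowstartWorkaround {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Default is 0.05: reduces start once 5% of maps have completed.
>     // 1.0f means no reduce is launched until all maps have completed.
>     conf.setFloat("mapreduce.job.reduce.slowstart.completedmaps", 1.0f);
>     Job job = Job.getInstance(conf, "big-job");  // hypothetical job name
>     // ... set mapper/reducer classes and input/output paths as usual ...
>     System.exit(job.waitForCompletion(true) ? 0 : 1);
>   }
> }
> {code}
> Note this only narrows the window for the deadlock: a completed map can still
> be re-run after reduces have started (e.g. on fetch failures), so reducer
> preemption in the AM or queue-level preemption is the more general remedy.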
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)