Thomas Graves commented on YARN-1631:

We need to be careful with this: it could end up starving out the first 
application, and it definitely changes the current semantics.

What version of Hadoop are you seeing this issue on? With my patch for 
reservations-continue-looking, the scheduler should actually look at Node_2, 
allocate there, and unreserve Node_1. There is also the needsContainer logic 
that might be affecting this, which I would have to look at more closely.

> Container allocation issue in Leafqueue assignContainers()
> ----------------------------------------------------------
>                 Key: YARN-1631
>                 URL: https://issues.apache.org/jira/browse/YARN-1631
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 2.2.0
>         Environment: SuSe 11 Linux 
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: Yarn-1631.1.patch, Yarn-1631.2.patch
> Application1 has a demand of 8GB [map task size of 8GB], which is more than 
> Node_1 can handle.
> Node_1 has a capacity of 8GB, of which 2GB is used by Application1's AM.
> Hence Application1 reserved the remaining 6GB on Node_1.
> A new job, Application2, is submitted with a 2GB AM and 2GB tasks, with only 
> 2 maps to run.
> Node_2 also has 8GB capacity.
> But Application2's AM cannot be launched on Node_2, and Application2 waits 
> longer since only 2 nodes are available in the cluster.
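The scenario above can be sketched with a small simulation. This is a hypothetical, heavily simplified model of the two behaviors discussed in this thread, not actual CapacityScheduler code: it assumes applications are tried in FIFO order on each node heartbeat, and that under the old behavior the queue stops at an application holding a reservation instead of trying that request on the heartbeating node. The function name `assign`, the dict shapes, and the app/node names are illustrative only.

```python
# Hypothetical, simplified sketch of the reported behavior vs. the
# "reservations continue looking" behavior -- not real YARN code.

def assign(node, apps, reservations, continue_looking):
    """One simulated heartbeat from `node`; returns a log of decisions."""
    log = []
    free = node["capacity"] - node["used"]
    for app in apps:
        demand = app["pending"]
        if demand == 0:
            continue
        reserved_on = reservations.get(app["name"])
        if reserved_on and not continue_looking:
            # Old behavior (assumed): a reserved app waits for its reserved
            # node, and the queue goes no further on this heartbeat.
            log.append(f'{app["name"]}: waiting on reservation ({reserved_on})')
            return log
        if demand <= free:
            node["used"] += demand
            app["pending"] = 0
            if reserved_on:
                reservations.pop(app["name"])  # unreserve the old node
                log.append(f'{app["name"]}: allocated on {node["name"]}, '
                           f'unreserved {reserved_on}')
            else:
                log.append(f'{app["name"]}: allocated on {node["name"]}')
            free = node["capacity"] - node["used"]
        else:
            reservations[app["name"]] = node["name"]
            log.append(f'{app["name"]}: reserved {node["name"]}')
            return log  # a new reservation also ends this pass
    return log

apps = [{"name": "App1", "pending": 8},   # 8GB map task
        {"name": "App2", "pending": 2}]   # 2GB AM
node2 = {"name": "Node_2", "capacity": 8, "used": 0}
reservations = {"App1": "Node_1"}         # 6GB already reserved on Node_1

print(assign(node2, apps, dict(reservations), continue_looking=False))
# Old behavior: App1 blocks the queue, so App2 never sees Node_2.
print(assign(node2, apps, dict(reservations), continue_looking=True))
# Patched behavior: App1 takes Node_2 and unreserves Node_1.
```

In the first pass App2 starves behind App1's reservation, matching the report; in the second pass App1 makes progress and releases its reservation, which is the outcome the patch aims for.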

This message was sent by Atlassian JIRA
