[ https://issues.apache.org/jira/browse/YARN-2413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14094897#comment-14094897 ]

Allen Wittenauer commented on YARN-2413:
----------------------------------------

What we're seeing with the default settings (as opposed to the fabricated 
numbers above... they just help make the problem evident) is that hundreds of 
containers can get allocated on the same node because the capacity scheduler 
isn't taking the core count into consideration at all.  This obviously leads 
to massive performance breakdowns, especially in a failure scenario where 
multiple NMs die.
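
For reference, a minimal sketch of the commonly cited workaround, assuming the 
stock DominantResourceCalculator is available in the running version: switch 
the capacity scheduler's resource calculator in capacity-scheduler.xml so that 
vcores are weighed alongside memory when deciding whether a container fits on 
a node (the default DefaultResourceCalculator looks at memory only).

  <!-- capacity-scheduler.xml: have the capacity scheduler account for vcores
       as well as memory in its allocation decisions. -->
  <property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>

With this in place, a node that has exhausted its yarn.nodemanager.resource.cpu-vcores 
should stop receiving additional containers even if it still has free memory.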

> capacity scheduler will overallocate vcores
> -------------------------------------------
>
>                 Key: YARN-2413
>                 URL: https://issues.apache.org/jira/browse/YARN-2413
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: scheduler
>    Affects Versions: 3.0.0, 2.2.0
>            Reporter: Allen Wittenauer
>            Priority: Critical
>
> It doesn't appear that the capacity scheduler is properly allocating vcores 
> when making scheduling decisions, which may result in overallocation of CPU 
> resources.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
