[ https://issues.apache.org/jira/browse/YARN-2604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220199#comment-14220199 ]

Karthik Kambatla commented on YARN-2604:
----------------------------------------

Thanks Robert. One more thing I missed - we need to handle vcores in addition 
to memory. I was hoping this would come for free with the "Resource" suggestion, 
but from looking at the code, I think we should handle vcores alongside memory 
the way the patch does now. 
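To make the vcores point concrete, here is a minimal sketch (not the actual patch; class and method names are made up for illustration) of tracking the largest registered node's memory and vcores independently, so each dimension is clamped on its own rather than via a single whole-Resource comparison:

{code}
// Hypothetical sketch: clamp the effective maximum allocation per resource
// dimension, so vcores are handled alongside memory instead of relying on a
// whole-Resource comparison.
public class EffectiveMaxAllocation {

  private final long configuredMaxMemoryMb;
  private final int configuredMaxVcores;

  // Largest single-node capacity seen so far, tracked per dimension.
  private long largestNodeMemoryMb = 0;
  private int largestNodeVcores = 0;

  public EffectiveMaxAllocation(long configuredMaxMemoryMb, int configuredMaxVcores) {
    this.configuredMaxMemoryMb = configuredMaxMemoryMb;
    this.configuredMaxVcores = configuredMaxVcores;
  }

  // Called when a node registers; each dimension is updated independently.
  public synchronized void nodeAdded(long nodeMemoryMb, int nodeVcores) {
    largestNodeMemoryMb = Math.max(largestNodeMemoryMb, nodeMemoryMb);
    largestNodeVcores = Math.max(largestNodeVcores, nodeVcores);
  }

  // Effective max memory: never above what the biggest node can offer.
  public synchronized long maxMemoryMb() {
    return Math.min(configuredMaxMemoryMb, largestNodeMemoryMb);
  }

  // Effective max vcores, clamped the same way as memory.
  public synchronized int maxVcores() {
    return Math.min(configuredMaxVcores, largestNodeVcores);
  }
}
{code}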

> Scheduler should consider max-allocation-* in conjunction with the largest 
> node
> -------------------------------------------------------------------------------
>
>                 Key: YARN-2604
>                 URL: https://issues.apache.org/jira/browse/YARN-2604
>             Project: Hadoop YARN
>          Issue Type: Improvement
>          Components: scheduler
>    Affects Versions: 2.5.1
>            Reporter: Karthik Kambatla
>            Assignee: Robert Kanter
>         Attachments: YARN-2604.patch, YARN-2604.patch, YARN-2604.patch, 
> YARN-2604.patch, YARN-2604.patch
>
>
> If the scheduler max-allocation-* values are larger than the resources 
> available on the largest node in the cluster, an application requesting 
> resources between the two values will be accepted by the scheduler but the 
> requests will never be satisfied. The app essentially hangs forever. 
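
For illustration, a small hypothetical example of the scenario described above (the property name yarn.scheduler.maximum-allocation-mb is real; the values and the validation logic shown are assumptions, not the scheduler's actual code path):

{code}
// Hypothetical illustration of the hang: a request that passes the
// configured-max check but exceeds every node in the cluster.
public class RequestValidationExample {

  public static void main(String[] args) {
    long configuredMaxMb = 16384;   // yarn.scheduler.maximum-allocation-mb
    long largestNodeMb   = 8192;    // biggest NodeManager actually registered
    long requestedMb     = 12288;   // container request from an application

    // Current behavior: only the configured max is checked, so the request
    // is accepted even though no node can ever host it -> the app hangs.
    boolean acceptedToday = requestedMb <= configuredMaxMb;

    // Behavior proposed in this JIRA: also cap by the largest node.
    boolean acceptedWithFix = requestedMb <= Math.min(configuredMaxMb, largestNodeMb);

    System.out.println("accepted today:    " + acceptedToday);    // true
    System.out.println("accepted with fix: " + acceptedWithFix);  // false
  }
}
{code}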



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
