[ 
https://issues.apache.org/jira/browse/YARN-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15069175#comment-15069175
 ] 

Naganarasimha G R commented on YARN-3215:
-----------------------------------------

Hi [~wangda], as I was discussing with you offline w.r.t. YARN-4225, this JIRA 
would be important in a multi-tenant scenario, since each tenant needs to know 
how much headroom is available to it. So I would like to assign this issue to 
myself.
Current behavior: apps are given the headroom of the default partition of a 
queue, and if the default partition has no nodes configured, the minimum 
allocation (vcores and MB) is returned as headroom. An even more erroneous 
situation arises when the default partition is much larger than some other 
partition: the headroom sent to the app is then larger than what it can 
actually use, which might lead to the app hanging.
I initially propose to send the app the headroom for all partitions accessible 
to the queue. Will try to work on a POC patch and share it at the earliest.
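To illustrate the proposal, here is a minimal sketch of computing a per-partition headroom map (label -> available MB) instead of a single default-partition value. All names here (PartitionHeadroom, computeHeadroom, the limit/used maps) are hypothetical for illustration, not actual YARN classes; real headroom also accounts for user limits, reservations, and vcores.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: headroom per partition = partition limit - used,
// floored at zero, one entry per partition accessible to the queue.
public class PartitionHeadroom {

    // partitionLimitMb: label -> queue limit in that partition (MB)
    // usedMb:           label -> resources already used by the app's queue (MB)
    // returns:          label -> available headroom (MB)
    static Map<String, Long> computeHeadroom(Map<String, Long> partitionLimitMb,
                                             Map<String, Long> usedMb) {
        Map<String, Long> headroom = new HashMap<>();
        for (Map.Entry<String, Long> e : partitionLimitMb.entrySet()) {
            long used = usedMb.getOrDefault(e.getKey(), 0L);
            headroom.put(e.getKey(), Math.max(0L, e.getValue() - used));
        }
        return headroom;
    }
}
```

With such a map sent back to the app (e.g. in the allocate response), an AM asking for resources under node-label=red would see only red's headroom rather than the (possibly much larger) default-partition headroom.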

> Respect labels in CapacityScheduler when computing headroom
> -----------------------------------------------------------
>
>                 Key: YARN-3215
>                 URL: https://issues.apache.org/jira/browse/YARN-3215
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacityscheduler
>            Reporter: Wangda Tan
>            Assignee: Wangda Tan
>
> In existing CapacityScheduler, when computing headroom of an application, it 
> will only consider "non-labeled" nodes of this application.
> But it is possible the application is asking for labeled resources, so 
> headroom-by-label (like 5G resource available under node-label=red) is 
> required to get better resource allocation and avoid deadlocks such as 
> MAPREDUCE-5928.
> This JIRA could involve both API changes (such as adding a 
> label-to-available-resource map in AllocateResponse) and also internal 
> changes in CapacityScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
