Sandy Ryza commented on YARN-3485:

It looks like the patch computes the headroom as min(cluster total - cluster 
consumed, queue max resource).  Do we not want it to be min(cluster total - 
cluster consumed, queue max resource - queue consumed)?

> FairScheduler headroom calculation doesn't consider maxResources for Fifo and 
> FairShare policies
> ------------------------------------------------------------------------------------------------
>                 Key: YARN-3485
>                 URL: https://issues.apache.org/jira/browse/YARN-3485
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler
>    Affects Versions: 2.7.0
>            Reporter: Karthik Kambatla
>            Assignee: Karthik Kambatla
>            Priority: Critical
>         Attachments: yarn-3485-1.patch, yarn-3485-prelim.patch
> FairScheduler's headroom calculations consider the fairshare and 
> cluster-available-resources, and the fairshare has maxResources. However, for 
> Fifo and FairShare policies, the fairshare is used only for memory and not 
> cpu. So, the scheduler ends up showing a higher headroom than is actually 
> available. This could lead to applications waiting for resources far longer 
> than they intend to, e.g. MAPREDUCE-6302.
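The effect of ignoring cpu can be sketched as follows. This is an illustration only; fitsMemoryOnly and fitsBoth are hypothetical helpers, not scheduler code, and plain ints stand in for Resource objects:

```java
public class MemoryOnlyHeadroom {

    // Memory-only fit check, mirroring the reported behavior where the
    // fairshare constrains only memory.
    static boolean fitsMemoryOnly(int freeMemMb, int reqMemMb) {
        return freeMemMb >= reqMemMb;
    }

    // Component-wise fit check: every resource dimension must have room.
    static boolean fitsBoth(int freeMemMb, int freeVcores,
                            int reqMemMb, int reqVcores) {
        return freeMemMb >= reqMemMb && freeVcores >= reqVcores;
    }

    public static void main(String[] args) {
        // 8 GB of memory free but zero vcores free; request is 1 GB / 1 vcore.
        System.out.println(fitsMemoryOnly(8192, 1024));    // true: headroom overstated
        System.out.println(fitsBoth(8192, 0, 1024, 1));    // false: nothing can run
    }
}
```

The memory-only check reports headroom even though no container can actually be scheduled, which is how an application can end up waiting far longer than expected.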

This message was sent by Atlassian JIRA