[ https://issues.apache.org/jira/browse/YARN-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305144#comment-17305144 ]

Eric Payne commented on YARN-6538:
----------------------------------

[~novaboy], please provide a specific use case that reproduces this issue. For 
example, please provide the cluster size and the applicable queue configuration 
parameters: number of queues, queue capacities, queue maximum capacities, queue 
user limit factors, queue minimum user limit percents, queue ordering policies, 
preemption parameters for each queue, etc. A sketch of the relevant properties 
follows below.
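
To illustrate what is being asked for, here is a minimal capacity-scheduler.xml 
sketch covering those properties. The queue names and all values below are 
hypothetical placeholders, not taken from this issue; substitute your actual 
configuration:

{code}
<!-- Hypothetical child queues "a" and "b" under root; values are placeholders. -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a,b</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.maximum-capacity</name>
  <value>100</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.user-limit-factor</name>
  <value>2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.minimum-user-limit-percent</name>
  <value>25</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.ordering-policy</name>
  <value>fair</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.disable_preemption</name>
  <value>false</value>
</property>
<!-- ...and the analogous properties for root.b. -->
{code}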

> Inter Queue preemption is not happening when DRF is configured
> --------------------------------------------------------------
>
>                 Key: YARN-6538
>                 URL: https://issues.apache.org/jira/browse/YARN-6538
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: capacity scheduler, scheduler preemption
>    Affects Versions: 2.8.0
>            Reporter: Sunil G
>            Assignee: Sunil G
>            Priority: Major
>
> Consider a cluster capacity of <memory:3TB, vCores:168>; memory is plentiful 
> here while vCores are scarce. If applications have enough demand, the vCores 
> can be exhausted. Inter-queue preemption should ideally kick in once vCores 
> are over-utilized; however, preemption is not happening.
> Analysis:
> In {{AbstractPreemptableResourceCalculator.computeFixpointAllocation}}, the loop
> {code}
>     // assign all cluster resources until no more demand, or no resources are
>     // left
>     while (!orderedByNeed.isEmpty() && Resources.greaterThan(rc, totGuarant,
>         unassigned, Resources.none())) {
> {code}
> continues to iterate even when the unassigned vCores are 0, because the 
> unassigned memory is still positive. As a result, idealAssigned ends up with 
> more vCores than it should, which causes the no-preemption cases.
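> A minimal standalone sketch of the comparison that keeps this loop alive 
> (assuming the stock {{Resources}} / {{DominantResourceCalculator}} APIs; the 
> class name and values are illustrative only, not from the cluster above):
> {code}
> import org.apache.hadoop.yarn.api.records.Resource;
> import org.apache.hadoop.yarn.util.resource.DominantResourceCalculator;
> import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
> import org.apache.hadoop.yarn.util.resource.Resources;
> 
> public class DrfLoopCondition {
>   public static void main(String[] args) {
>     ResourceCalculator rc = new DominantResourceCalculator();
>     // Total guaranteed capacity: ~3TB of memory (in MB) and 168 vCores.
>     Resource totGuarant = Resource.newInstance(3 * 1024 * 1024, 168);
>     // Unassigned resources: memory is left over, but vCores are all gone.
>     Resource unassigned = Resource.newInstance(512 * 1024, 0);
>     // Under DRF the dominant share of 'unassigned' is its (positive) memory
>     // share, so this comparison stays true and the fix-point loop keeps
>     // assigning, inflating the vCores recorded in idealAssigned.
>     System.out.println(
>         Resources.greaterThan(rc, totGuarant, unassigned, Resources.none()));
>     // Expected output under this reasoning: true
>   }
> }
> {code}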


