[
https://issues.apache.org/jira/browse/YARN-7149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16157212#comment-16157212
]
Wangda Tan commented on YARN-7149:
----------------------------------
And the other place we need to change is
{{LQ#getTotalPendingResourcesConsideringUserLimit}}.
Instead of doing a strict headroom computation, {{headroom = userLimitResource - user.used}},
we should relax this computation as well: give at least one container of
pending demand whenever headroom >= 0. Otherwise the preemption logic is
inconsistent with the allocation logic.
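As a minimal sketch of the proposed relaxation (this is not the actual
{{LeafQueue}} code: resources are flattened to a single memory dimension in
MB, and all class, method, and parameter names here are illustrative):
{code:java}
// Illustrative sketch only -- not the real LeafQueue implementation.
// Resources are simplified to a single long (MB); names are hypothetical.
public class HeadroomSketch {

  // Strict version: pending demand is capped by
  // headroom = userLimitResource - user.used, floored at zero.
  static long pendingStrict(long userPending, long userLimitResource,
      long userUsed) {
    long headroom = Math.max(0, userLimitResource - userUsed);
    return Math.min(userPending, headroom);
  }

  // Relaxed version per the comment: whenever headroom >= 0, count at
  // least one minimum-size container of pending demand, so the preemption
  // monitor never reports zero demand while the allocation path would
  // still give the user one more container.
  static long pendingRelaxed(long userPending, long userLimitResource,
      long userUsed, long minimumAllocation) {
    long headroom = userLimitResource - userUsed;
    if (headroom < 0 || userPending == 0) {
      return 0;
    }
    return Math.min(userPending, Math.max(headroom, minimumAllocation));
  }

  public static void main(String[] args) {
    // Numbers from the use case below: 0.5 GB (512 MB) containers, App2's
    // user assumed to sit exactly at a 1 GB user limit, 9 GB still pending.
    long limit = 1024, used = 1024, pending = 9 * 1024, minAlloc = 512;
    System.out.println(pendingStrict(pending, limit, used));            // 0
    System.out.println(pendingRelaxed(pending, limit, used, minAlloc)); // 512
  }
}
{code}
With the strict computation the monitor sees zero pending for a user at its
limit and preempts nothing more; with the relaxed one it keeps seeing at
least one container of demand, matching what the scheduler would allocate.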
> Cross-queue preemption sometimes starves an underserved queue
> -------------------------------------------------------------
>
> Key: YARN-7149
> URL: https://issues.apache.org/jira/browse/YARN-7149
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler
> Affects Versions: 2.9.0, 3.0.0-alpha3
> Reporter: Eric Payne
> Assignee: Eric Payne
> Attachments: YARN-7149.demo.unit-test.patch
>
>
> In branch-2 and trunk, I am consistently seeing some use cases where
> cross-queue preemption does not happen when it should. I do not see this in
> branch-2.8.
> Use Case:
> | | *Size* | *Minimum Container Size* |
> |MyCluster | 20 GB | 0.5 GB |
> | *Queue Name* | *Capacity* | *Absolute Capacity* | *Minimum User Limit Percent (MULP)* | *User Limit Factor (ULF)* |
> |Q1 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 |
> |Q2 | 50% = 10 GB | 100% = 20 GB | 10% = 1 GB | 2.0 |
> - {{User1}} launches {{App1}} in {{Q1}} and consumes all resources (20 GB)
> - {{User2}} launches {{App2}} in {{Q2}} and requests 10 GB
> - _Note: containers are 0.5 GB._
> - Preemption monitor kills 2 containers (totaling 1 GB) from {{App1}} in {{Q1}}.
> - Capacity Scheduler assigns 2 containers (totaling 1 GB) to {{App2}} in {{Q2}}.
> - _No more containers are ever preempted, even though {{Q2}} is far underserved._
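Reading the use case against the comment above: once the first two containers
are preempted and reassigned, {{User2}} in {{Q2}} has presumably reached its
computed user-limit resource. The strict formula
{{headroom = userLimitResource - user.used}} then yields 0, so
{{LQ#getTotalPendingResourcesConsideringUserLimit}} reports no remaining
demand and the preemption monitor stops, even though the allocation path
would still hand a user with headroom >= 0 one more container.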