[
https://issues.apache.org/jira/browse/YARN-8379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16532643#comment-16532643
]
XiandongQI commented on YARN-8379:
----------------------------------
Thanks [~Zian Chen], [~leftnoteasy], [~sunilg].
I am unclear about the definition of "used-capacity". In version 3.1.0, when
calculating "used-capacity", does the scheduler only consider the total number
of occupied containers in each LeafQueue, without considering the relative
percentage of occupied containers on each partition/NodeLabel?
The comment ("For two queues with the same priority: - ") in the source code is
not clear enough.
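For illustration, here is a minimal Java sketch of the two readings I have in
mind; the class and method names are mine, not the actual CapacityScheduler
internals:

    import java.util.Map;

    class QueueUsageSketch {
        // Reading 1: aggregate -- one used-capacity ratio per LeafQueue,
        // based only on the total number of occupied containers.
        static float aggregateUsedCapacity(long usedContainers, long totalContainers) {
            return totalContainers == 0 ? 0f : (float) usedContainers / totalContainers;
        }

        // Reading 2: per-partition -- a separate ratio for each node label,
        // so the same queue can be 20% used on one partition and 90% on another.
        static float partitionUsedCapacity(Map<String, Long> usedByPartition,
                                           Map<String, Long> totalByPartition,
                                           String partition) {
            long total = totalByPartition.getOrDefault(partition, 0L);
            long used = usedByPartition.getOrDefault(partition, 0L);
            return total == 0 ? 0f : (float) used / total;
        }
    }

My question is whether 3.1.0 effectively implements only the first reading.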
> Improve balancing resources in already satisfied queues by using Capacity
> Scheduler preemption
> ----------------------------------------------------------------------------------------------
>
> Key: YARN-8379
> URL: https://issues.apache.org/jira/browse/YARN-8379
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Wangda Tan
> Assignee: Zian Chen
> Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8379.001.patch, YARN-8379.002.patch,
> YARN-8379.003.patch, YARN-8379.004.patch, YARN-8379.005.patch,
> YARN-8379.006.patch, ericpayne.confs.tgz
>
>
> The existing capacity scheduler only supports preemption that brings an
> underutilized queue up to its guaranteed resource. In addition to that,
> there is a requirement to achieve better balance between queues when all of
> them have reached their guaranteed resource but hold different shares beyond
> it.
> An example: 3 queues with capacities queue_a = 30%, queue_b = 30%, queue_c
> = 40%. At time T, queue_a is using 30% and queue_b is using 70%. With the
> existing scheduler, preemption won't happen. But this is unfair to queue_a,
> since queue_a has the same guaranteed resources as queue_b. (See the sketch
> after this description.)
> Before YARN-5864, the capacity scheduler did additional preemption to
> balance queues. We changed that logic because it could preempt too many
> containers between queues when all queues are satisfied.
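To make the example above concrete, here is a minimal Java sketch of balancing
two already-satisfied queues in proportion to their guarantees; it only
illustrates the numbers in the description and is not the actual preemption
policy:

    public class BalanceExample {
        public static void main(String[] args) {
            // Capacities from the example (fractions of the cluster);
            // queue_c (40%) is idle at time T.
            double guaranteedA = 0.30, guaranteedB = 0.30;
            double usedA = 0.30, usedB = 0.70;

            // Split the total usage in proportion to the guarantees.
            // With equal guarantees, each queue's balanced share is 50%.
            double totalUsed = usedA + usedB;
            double idealA = totalUsed * guaranteedA / (guaranteedA + guaranteedB);
            double idealB = totalUsed * guaranteedB / (guaranteedA + guaranteedB);

            // queue_b sits 20% of the cluster above its balanced share, so a
            // balancing policy could preempt up to that amount for queue_a.
            double preemptFromB = Math.max(0.0, usedB - idealB);
            System.out.printf("ideal_a=%.2f ideal_b=%.2f preempt_from_b=%.2f%n",
                    idealA, idealB, preemptFromB);
        }
    }

This prints ideal_a=0.50 ideal_b=0.50 preempt_from_b=0.20, matching the
intuition that queue_b should give back containers until both queues hold
equal shares.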