[
https://issues.apache.org/jira/browse/YARN-10613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Eric Payne updated YARN-10613:
------------------------------
Summary: Config to allow Intra- and Inter-queue preemption to
enable/disable conservativeDRF (was: Config to allow Intra-queue preemption to
enable/disable conservativeDRF)
> Config to allow Intra- and Inter-queue preemption to enable/disable
> conservativeDRF
> ------------------------------------------------------------------------------------
>
> Key: YARN-10613
> URL: https://issues.apache.org/jira/browse/YARN-10613
> Project: Hadoop YARN
> Issue Type: Improvement
> Components: capacity scheduler, scheduler preemption
> Affects Versions: 3.3.0, 3.2.2, 3.1.4, 2.10.1
> Reporter: Eric Payne
> Assignee: Eric Payne
> Priority: Minor
>
> YARN-8292 added code that prevents CS intra-queue preemption from preempting
> containers from an app unless all of the major resources used by the app are
> greater than the user limit for that user.
> Ex:
> | Used | User Limit |
> | <58 GB, 58 vcores> | <30 GB, 300 vcores> |
> In this example, only used memory is above the user limit, not used vcores.
> So, intra-queue preemption will not occur.
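> As a minimal, self-contained sketch (plain Java rather than the actual YARN code), the two comparison modes applied to the example numbers look roughly like this:
> {code:java}
> // Hypothetical illustration of the two DRF comparison modes described above,
> // using a simple two-resource tuple; this is not the YARN implementation.
> public class ConservativeDrfExample {
>   static class Res {
>     final long memoryMB;
>     final long vcores;
>     Res(long memoryMB, long vcores) {
>       this.memoryMB = memoryMB;
>       this.vcores = vcores;
>     }
>   }
>
>   // conservative: the app is over its user limit only if *all* resources exceed it
>   static boolean overLimitConservative(Res used, Res userLimit) {
>     return used.memoryMB > userLimit.memoryMB && used.vcores > userLimit.vcores;
>   }
>
>   // non-conservative: over the limit if *any* resource exceeds it
>   static boolean overLimitAggressive(Res used, Res userLimit) {
>     return used.memoryMB > userLimit.memoryMB || used.vcores > userLimit.vcores;
>   }
>
>   public static void main(String[] args) {
>     Res used = new Res(58 * 1024, 58);       // <58 GB, 58 vcores>
>     Res userLimit = new Res(30 * 1024, 300); // <30 GB, 300 vcores>
>     System.out.println(overLimitConservative(used, userLimit)); // false -> no preemption
>     System.out.println(overLimitAggressive(used, userLimit));   // true  -> would preempt
>   }
> }
> {code}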
> YARN-8292 added the {{conservativeDRF}} flag to
> {{CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain}}.
> If {{conservativeDRF}} is false, containers will be preempted from apps in
> the example state. If true, containers will not be preempted.
> This flag is hard-coded to false for inter-queue (cross-queue) preemption and
> true for intra-queue (in-queue) preemption.
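> For context, below is a simplified, hypothetical sketch of how the hard-coded flag reaches the utility method today; everything other than the {{conservativeDRF}} parameter and the {{tryPreemptContainerAndDeductResToObtain}} name is a placeholder, and the real method takes many more arguments.
> {code:java}
> // Simplified placeholder, not the actual YARN signatures.
> class PreemptionFlagSketch {
>   // Stands in for CapacitySchedulerPreemptionUtils#tryPreemptContainerAndDeductResToObtain.
>   static void tryPreemptContainerAndDeductResToObtain(boolean conservativeDRF) {
>     // ... candidate selection and resource bookkeeping elided ...
>   }
>
>   static void selectCrossQueueCandidates() {
>     // inter-queue (cross-queue) preemption: conservativeDRF hard-coded to false
>     tryPreemptContainerAndDeductResToObtain(false);
>   }
>
>   static void selectIntraQueueCandidates() {
>     // intra-queue (in-queue) preemption: conservativeDRF hard-coded to true
>     tryPreemptContainerAndDeductResToObtain(true);
>   }
> }
> {code}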
> In some cases, we want intra-queue preemption to be more aggressive and to
> preempt in the example case. To accommodate that, I propose adding the
> following config property:
> {code:xml}
> <property>
>   <name>yarn.resourcemanager.monitor.capacity.preemption.intra-queue-preemption.conservative-drf</name>
>   <value>true</value>
> </property>
> {code}
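> As a rough sketch of how the policy could pick this property up via the standard Hadoop {{Configuration}} API (the constant name and default value below are assumptions, not the final implementation):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
>
> // Hypothetical helper; the constant and the default value are assumptions.
> class IntraQueueConservativeDrfConfig {
>   static final String INTRAQUEUE_PREEMPTION_CONSERVATIVE_DRF =
>       "yarn.resourcemanager.monitor.capacity.preemption."
>           + "intra-queue-preemption.conservative-drf";
>   // Assumed default: keep today's hard-coded intra-queue behavior (true).
>   static final boolean DEFAULT_INTRAQUEUE_PREEMPTION_CONSERVATIVE_DRF = true;
>
>   static boolean isIntraQueueConservativeDrf(Configuration conf) {
>     return conf.getBoolean(INTRAQUEUE_PREEMPTION_CONSERVATIVE_DRF,
>         DEFAULT_INTRAQUEUE_PREEMPTION_CONSERVATIVE_DRF);
>   }
> }
> {code}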
--
This message was sent by Atlassian Jira
(v8.3.4#803005)