[
https://issues.apache.org/jira/browse/YARN-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17311649#comment-17311649
]
Anup Agarwal edited comment on YARN-10724 at 3/30/21, 5:13 PM:
---------------------------------------------------------------
I have added a unit test that triggers the overcounting issue along with a fix
[^YARN-10724-trunk.001.patch].
The fix also updates FairScheduler to log other preemption metrics including
preemptedMemorySeconds and preemptedVcoreSeconds.
was (Author: 108anup):
I have added a unit test that triggers the overcounting issue along with a fix
[^YARN-10724-trunk.001.patch].
The fix also updates FairScheduler to log preemptedMemorySeconds and
preemptedVcoreSeconds.
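For context, preemptedMemorySeconds and preemptedVcoreSeconds are resource-time
aggregates: each preempted container contributes its memory (MB) and vcore
allocation multiplied by the time it ran. A minimal, self-contained sketch of
that accumulation (hypothetical names, not code from the attached patch):

{code:java}
// Illustrative only: hypothetical class, not part of the YARN-10724 patch.
public class ResourceSecondsSketch {
  static long preemptedMemorySeconds = 0; // MB * seconds
  static long preemptedVcoreSeconds = 0;  // vcores * seconds

  // Accumulate resource-seconds for a container that was preempted after
  // running for runtimeSeconds with the given allocation.
  static void onContainerPreempted(long memoryMB, int vcores, long runtimeSeconds) {
    preemptedMemorySeconds += memoryMB * runtimeSeconds;
    preemptedVcoreSeconds += vcores * runtimeSeconds;
  }

  public static void main(String[] args) {
    // A 2048 MB, 2-vcore container preempted after running for 30 seconds.
    onContainerPreempted(2048, 2, 30);
    System.out.println("preemptedMemorySeconds = " + preemptedMemorySeconds); // 61440
    System.out.println("preemptedVcoreSeconds  = " + preemptedVcoreSeconds);  // 60
  }
}
{code}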
> Overcounting of preemptions in CapacityScheduler (LeafQueue metrics)
> --------------------------------------------------------------------
>
> Key: YARN-10724
> URL: https://issues.apache.org/jira/browse/YARN-10724
> Project: Hadoop YARN
> Issue Type: Bug
> Reporter: Anup Agarwal
> Assignee: Anup Agarwal
> Priority: Minor
> Attachments: YARN-10724-trunk.001.patch
>
>
> Currently CapacityScheduler over-counts preemption metrics inside
> QueueMetrics.
>
> One cause of the over-counting:
> When a container is already running, SchedulerNode does not remove the
> container immediately from the launchedContainers list; instead it waits
> for the NM to kill the container.
> Both NODE_RESOURCE_UPDATE and NODE_UPDATE invoke
> signalContainersIfOvercommited (AbstractYarnScheduler), which looks for
> containers to preempt based on the launchedContainers list. Both of these
> calls can create a ContainerPreemptEvent for the same container (as the
> RM is still waiting for the NM to kill the container). This leads
> LeafQueue to log metrics for the same preemption multiple times.
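Below is a minimal, self-contained sketch of the over-counting pattern
described above and one way to guard against it (hypothetical class and
method names, not the actual CapacityScheduler code or the attached patch):
if the queue-level counter is bumped every time a kill signal is issued, a
still-running container that is signalled from both the NODE_RESOURCE_UPDATE
and NODE_UPDATE paths gets counted twice, whereas de-duplicating by
container ID counts it once.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Illustrative only: hypothetical classes and fields, not Hadoop code.
public class PreemptionCountSketch {
  // Stand-in for the queue-level "containers preempted" metric.
  static long containersPreempted = 0;
  // Containers for which a preemption has already been recorded.
  static final Set<String> alreadyCounted = new HashSet<>();

  // Naive accounting: counts on every kill signal, so repeated signals
  // for the same container inflate the metric.
  static void recordPreemptionNaive(String containerId) {
    containersPreempted++;
  }

  // De-duplicated accounting: counts each container at most once.
  static void recordPreemptionOnce(String containerId) {
    if (alreadyCounted.add(containerId)) {
      containersPreempted++;
    }
  }

  public static void main(String[] args) {
    // The same running container is signalled from two scheduler events
    // while the RM is still waiting for the NM to kill it.
    recordPreemptionNaive("container_001");
    recordPreemptionNaive("container_001");
    System.out.println("naive count = " + containersPreempted); // 2 (over-counted)

    containersPreempted = 0;
    recordPreemptionOnce("container_001");
    recordPreemptionOnce("container_001");
    System.out.println("dedup count = " + containersPreempted); // 1
  }
}
{code}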