[
https://issues.apache.org/jira/browse/YARN-7051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16134289#comment-16134289
]
Sunil G commented on YARN-7051:
-------------------------------
Thanks [~eepayne]
I just checked the code base, and I think there is one more place where we use
{{getApplications}} without holding any lock.
In {{computeAppsIdealAllocation}}, we pass {{apps}} to
{{createTempAppForResCalculation}}, which internally iterates over the collection:
{code}
    // 2. tq.leafQueue will not be null as we validated it in caller side
    Collection<FiCaSchedulerApp> apps = tq.leafQueue.getAllApplications();

    // We do not need preemption for a single app
    if (apps.size() == 1) {
      return;
    }

    // 3. Create all tempApps for internal calculation and return a list from
    // high priority to low priority order.
    PriorityQueue<TempAppPerPartition> orderedByPriority =
        createTempAppForResCalculation(tq, apps, clusterResource, perUserAMUsed);
{code}
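To make the failure mode concrete, here is a minimal, self-contained sketch (not YARN code; {{FakeLeafQueue}} and its string entries are hypothetical stand-ins for {{LeafQueue}} and {{FiCaSchedulerApp}}) showing how iterating a live, non-concurrent view returned by a getApplications-style method can fail with {{ConcurrentModificationException}} while another thread mutates the queue:
{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.ConcurrentModificationException;
import java.util.List;

public class GetApplicationsRaceSketch {

  // Stand-in for LeafQueue: returns an unmodifiable but *live* view of the
  // backing list, which is the pattern the comment above is concerned about.
  static class FakeLeafQueue {
    private final List<String> apps = new ArrayList<>();

    Collection<String> getAllApplications() {
      return Collections.unmodifiableCollection(apps);
    }

    void addApp(String app) {
      apps.add(app);
    }
  }

  public static void main(String[] args) throws Exception {
    FakeLeafQueue queue = new FakeLeafQueue();
    for (int i = 0; i < 1000; i++) {
      queue.addApp("app_" + i);
    }

    // "Scheduler" thread keeps mutating the underlying list.
    Thread scheduler = new Thread(() -> {
      for (int i = 0; i < 100_000; i++) {
        queue.addApp("late_app_" + i);
      }
    });
    scheduler.start();

    // "Preemption monitor" thread iterates the live view without any lock.
    try {
      long checksum = 0;
      for (String app : queue.getAllApplications()) {
        checksum += app.length(); // simulate per-app work
      }
      System.out.println("Iteration finished, checksum=" + checksum);
    } catch (ConcurrentModificationException e) {
      System.out.println("Monitor thread would crash here: " + e);
    }
    scheduler.join();
  }
}
{code}
Whether the exception fires on a given run is timing-dependent, which is consistent with the intermittent crash described in the issue below.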
> FifoIntraQueuePreemptionPlugin can get concurrent modification exception
> ------------------------------------------------------------------------
>
> Key: YARN-7051
> URL: https://issues.apache.org/jira/browse/YARN-7051
> Project: Hadoop YARN
> Issue Type: Bug
> Components: capacity scheduler, scheduler preemption, yarn
> Affects Versions: 2.9.0, 2.8.1, 3.0.0-alpha3
> Reporter: Eric Payne
> Assignee: Eric Payne
> Priority: Critical
> Attachments: YARN-7051.001.patch
>
>
> {{FifoIntraQueuePreemptionPlugin#calculateUsedAMResourcesPerQueue}} has the
> following code:
> {code}
>     Collection<FiCaSchedulerApp> runningApps = leafQueue.getApplications();
>     Resource amUsed = Resources.createResource(0, 0);
>     for (FiCaSchedulerApp app : runningApps) {
> {code}
> {{runningApps}} is unmodifiable but not concurrent. This caused the preemption
> monitor thread in the RM to crash on one of our clusters.
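For reference, one common way to make such an iteration safe, shown here only as a hedged sketch and not as what YARN-7051.001.patch actually does ({{SketchQueue}} and {{getApplicationsSnapshot}} are hypothetical names), is to hand callers a copy taken under the queue's lock instead of a live view:
{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SnapshotAppsSketch {

  // Hypothetical queue that snapshots its applications under a read lock, so
  // callers such as a preemption monitor can iterate without racing writers.
  static class SketchQueue {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<String> apps = new ArrayList<>();

    void addApp(String app) {
      lock.writeLock().lock();
      try {
        apps.add(app);
      } finally {
        lock.writeLock().unlock();
      }
    }

    // Return a copy rather than a live view; iterating the copy cannot throw
    // ConcurrentModificationException no matter what writers do afterwards.
    Collection<String> getApplicationsSnapshot() {
      lock.readLock().lock();
      try {
        return new ArrayList<>(apps);
      } finally {
        lock.readLock().unlock();
      }
    }
  }

  public static void main(String[] args) {
    SketchQueue queue = new SketchQueue();
    queue.addApp("app_1");
    queue.addApp("app_2");

    // The "preemption monitor" iterates a stable snapshot.
    for (String app : queue.getApplicationsSnapshot()) {
      System.out.println("processing " + app);
    }
  }
}
{code}
The trade-off is an extra copy per call, which is usually acceptable for a monitor that runs on a fixed interval.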