Ngone51 commented on a change in pull request #29817:
URL: https://github.com/apache/spark/pull/29817#discussion_r509902290
##########
File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
##########
@@ -465,72 +464,50 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
    * @param executorsAndDecomInfo Identifiers of executors & decommission info.
    * @param adjustTargetNumExecutors whether the target number of executors will be adjusted down
    *                                 after these executors have been decommissioned.
+   * @param triggeredByExecutor whether the decommission is triggered at executor.
    * @return the ids of the executors acknowledged by the cluster manager to be removed.
    */
   override def decommissionExecutors(
       executorsAndDecomInfo: Array[(String, ExecutorDecommissionInfo)],
-      adjustTargetNumExecutors: Boolean): Seq[String] = {
-
+      adjustTargetNumExecutors: Boolean,
+      triggeredByExecutor: Boolean): Seq[String] = withLock {
     // Do not change this code without running the K8s integration suites
-    val executorsToDecommission = executorsAndDecomInfo.filter { case (executorId, decomInfo) =>
-      CoarseGrainedSchedulerBackend.this.synchronized {
Review comment:
Because now we not only change the mutable state but also call `scheduler.executorDecommission`, and `scheduler.executorDecommission` requires the lock on `TaskSchedulerImpl`. To avoid a potential deadlock, we should use `withLock`.

More context: previously, changing the mutable state and calling `scheduler.executorDecommission` happened in two separate functions. In this PR we combined those two functions into one, so the two operations now sit in the same code block and need the `withLock` protection.
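
For readers unfamiliar with the helper: `withLock` fixes a single global lock order by acquiring the `TaskSchedulerImpl` monitor first and the backend's own monitor second, so a thread already holding the scheduler lock can never race another thread taking the pair in the opposite order. A minimal, self-contained sketch of the pattern (the class names `Scheduler`, `Backend`, and `Demo` and the method bodies are illustrative, not the actual Spark code):

```scala
// Hypothetical stand-in for TaskSchedulerImpl: its methods lock its own monitor.
class Scheduler {
  def executorDecommission(id: String): Unit = synchronized {
    println(s"scheduler state updated for $id")
  }
}

// Hypothetical stand-in for CoarseGrainedSchedulerBackend.
class Backend(scheduler: Scheduler) {
  // Fixed lock order: the scheduler's monitor first, then the backend's own.
  // Every path that needs both locks enters through withLock, so no two
  // threads can ever acquire the pair in opposite orders and deadlock.
  private def withLock[T](fn: => T): T = scheduler.synchronized {
    this.synchronized { fn }
  }

  def decommissionExecutor(id: String): Unit = withLock {
    // Mutate backend-local state under the backend's monitor ...
    // ... then call into the scheduler, which re-enters its already-held lock.
    scheduler.executorDecommission(id)
  }
}

object Demo extends App {
  new Backend(new Scheduler).decommissionExecutor("exec-1")
}
```

Without the fixed order, one thread could lock the backend and then wait for the scheduler while another thread, already inside a synchronized scheduler method, waits for the backend, which is exactly the deadlock `withLock` rules out.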