q2w commented on a change in pull request #32766:
URL: https://github.com/apache/spark/pull/32766#discussion_r647114128
##########
File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
##########
@@ -519,10 +558,7 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
scheduler.sc.env.blockManager.master.decommissionBlockManagers(executorsToDecommission)
if (!triggeredByExecutor) {
- executorsToDecommission.foreach { executorId =>
- logInfo(s"Notify executor $executorId to decommissioning.")
- executorDataMap(executorId).executorEndpoint.send(DecommissionExecutor)
- }
Review comment:
@Ngone51 No, I haven't seen this in a public cloud. We have some
experience with this issue in a private cloud that had a longer timeout
for forceful node removal, and that was the motivation for this PR:
to give the user some control over the decommissioning process.
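
For context, here is a minimal, self-contained sketch of the driver-side
notification pattern the removed lines implemented, with the user control the
comment alludes to modeled as a plain flag. `ExecutorEndpoint`, the
`notifyExecutors` switch, and the hard-coded executor map are simplified
stand-ins for illustration, not Spark's actual classes or configuration.

object DecommissionSketch {
  // Stand-in for Spark's DecommissionExecutor RPC message.
  case object DecommissionExecutor

  // Stand-in for an executor's RPC endpoint reference (hypothetical).
  class ExecutorEndpoint(executorId: String) {
    def send(msg: Any): Unit =
      println(s"Executor $executorId received: $msg")
  }

  def main(args: Array[String]): Unit = {
    // Simplified analogue of the driver's executorDataMap.
    val executorDataMap = Map(
      "1" -> new ExecutorEndpoint("1"),
      "2" -> new ExecutorEndpoint("2"))

    val executorsToDecommission = Seq("1", "2")
    val triggeredByExecutor = false
    // Hypothetical user-facing switch motivated by the PR: whether the
    // driver should proactively tell each executor to decommission.
    val notifyExecutors = true

    if (!triggeredByExecutor && notifyExecutors) {
      executorsToDecommission.foreach { executorId =>
        println(s"Notifying executor $executorId to decommission.")
        executorDataMap(executorId).send(DecommissionExecutor)
      }
    }
  }
}

Gating the loop on a flag like this is one way to hand the decision to the
user in an environment that allows a longer grace period before forceful
node removal.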