agrawaldevesh commented on a change in pull request #29722:
URL: https://github.com/apache/spark/pull/29722#discussion_r488069363



##########
File path: core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
##########
@@ -79,12 +79,23 @@ private[spark] class CoarseGrainedExecutorBackend(
    */
   private[executor] val taskResources = new mutable.HashMap[Long, Map[String, ResourceInformation]]
 
-  @volatile private var decommissioned = false
+  private var decommissioned = false
 
   override def onStart(): Unit = {
-    logInfo("Registering PWR handler.")
-    SignalUtils.register("PWR", "Failed to register SIGPWR handler - " +
-      "disabling decommission feature.")(decommissionSelf)
+    if (env.conf.get(DECOMMISSION_ENABLED)) {
+      logInfo("Registering PWR handler to trigger decommissioning.")
+      SignalUtils.register("PWR", "Failed to register SIGPWR handler - " +
+      "disabling executor decommission feature.") {
+        self.send(DecommissionExecutor)

Review comment:
       I think being async here is fine, but I would like to better understand 
the need for inlining the logic that was removed from L273 below.
   
   Is ExecutorDecommissioning only supposed to be called when the executor 
receives a SIGPWR, and NOT when it is asked to decommission by the driver (or 
in the standalone scenario)?
   
   Earlier it seemed that both the PWR signal handling and the other codepaths 
went through the same `decommissionSelf` code flow, but now they appear to 
diverge.
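   To make the question concrete, here is a minimal, self-contained sketch of 
the shape I had in mind. The names `DecommissionExecutor` and 
`decommissionSelf` are modelled on this diff; the queue-based inbox is just a 
stand-in for the RPC endpoint's receive loop, not the actual Spark machinery. 
The point is that both the signal handler and a driver-initiated request only 
enqueue a message, and everything converges on `decommissionSelf`:

```scala
import java.util.concurrent.LinkedBlockingQueue

object DecommissionSketch {
  sealed trait Message
  case object DecommissionExecutor extends Message

  // Stand-in for the RPC endpoint's inbox; in Spark this would be the
  // RpcEnv delivering messages to receive().
  private val inbox = new LinkedBlockingQueue[Message]()

  @volatile private var decommissioned = false

  // What the SIGPWR handler would do: enqueue and return, nothing inline.
  def onSigPwr(): Boolean = {
    inbox.put(DecommissionExecutor)
    true // tell the signal-handling machinery the signal was handled
  }

  // What a driver-initiated decommission would do: same message, same path.
  def onDriverDecommission(): Unit = inbox.put(DecommissionExecutor)

  // Single receive loop: both entry points converge on decommissionSelf().
  def processOneMessage(): Unit = inbox.take() match {
    case DecommissionExecutor => decommissionSelf()
  }

  private def decommissionSelf(): Unit = {
    if (!decommissioned) {
      decommissioned = true
      println("Decommissioning: stop accepting tasks, notify driver, etc.")
    }
  }

  def main(args: Array[String]): Unit = {
    onSigPwr()          // simulate receiving SIGPWR
    processOneMessage() // handled via decommissionSelf()
    onDriverDecommission()
    processOneMessage() // second request is a no-op: already decommissioned
  }
}
```

   If both paths keep converging like this, the signal-handler body in this 
diff could stay a one-liner and `decommissionSelf` would remain the single 
place where the state transition happens.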
   



