agrawaldevesh commented on a change in pull request #29014:
URL: https://github.com/apache/spark/pull/29014#discussion_r459859455



##########
File path: core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala
##########
@@ -939,12 +941,40 @@ private[spark] class TaskSchedulerImpl(
 
   override def executorDecommission(
       executorId: String, decommissionInfo: ExecutorDecommissionInfo): Unit = {
+    synchronized {
+      // The scheduler can get multiple decommission updates from multiple sources,
+      // and some of those can have isHostDecommissioned false. We merge them such that
+      // if we heard isHostDecommissioned ever true, then we keep that one since it is
+      // most likely coming from the cluster manager and thus authoritative
+      val oldDecomInfo = executorsPendingDecommission.get(executorId)
+      if (oldDecomInfo.isEmpty || !oldDecomInfo.get.isHostDecommissioned) {
+        executorsPendingDecommission(executorId) = decommissionInfo
+      }
+    }
     rootPool.executorDecommission(executorId)
     backend.reviveOffers()
   }
 
-  override def executorLost(executorId: String, reason: ExecutorLossReason): Unit = {
+  override def getExecutorDecommissionInfo(executorId: String)
+    : Option[ExecutorDecommissionInfo] = synchronized {
+      executorsPendingDecommission.get(executorId)
+  }
+
+  override def executorLost(executorId: String, givenReason: ExecutorLossReason): Unit = {
     var failedExecutor: Option[String] = None
+    val reason = givenReason match {
+      // Handle executor process loss due to decommissioning
+      case ExecutorProcessLost(message, workerLost, causedByApp) =>
+        val executorDecommissionInfo = getExecutorDecommissionInfo(executorId)
+        ExecutorProcessLost(
+          message,
+          // Also mark the worker lost if we know that the host was decommissioned
+          workerLost || executorDecommissionInfo.exists(_.isHostDecommissioned),

Review comment:
       Indeed, that can happen:
   
   The first case is a regular executor loss that happens without any decommissioning. In that case, we don't want this logic to kick in.
   
   The second is a race where a decommissioning happens but the `DecommissionExecutor` message is somehow delayed and only gets processed after the executor loses its heartbeat.
   
   In any case, this whole mechanism is best effort: we cannot avoid _all_ of the races, but as long as we prevent job failures in the common case of decommissioning, we are okay.
   
   That said, I can totally see your concern about `executorsPendingDecommission` growing and the executors never being removed from it. I have fixed the code to handle that situation by not adding anything to `executorsPendingDecommission` once `removeExecutor` has been called; a rough sketch of that guard follows below.
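   
   To make the intent concrete, here is a minimal, self-contained Scala sketch of that guard, not the exact PR code: decommission info is only recorded while the executor is still registered, and `removeExecutor` drops both the registration and any pending decommission entry. The `DecommissionTracker` class and the `registeredExecutors` set are hypothetical stand-ins for the corresponding state in `TaskSchedulerImpl`.
   
   ```scala
   import scala.collection.mutable
   
   // Hypothetical stand-in for org.apache.spark.scheduler.ExecutorDecommissionInfo.
   case class ExecutorDecommissionInfo(message: String, isHostDecommissioned: Boolean)
   
   class DecommissionTracker {
     // Executors currently registered with the scheduler.
     private val registeredExecutors = mutable.HashSet[String]()
     // Pending decommission info, keyed by executor id.
     private val executorsPendingDecommission =
       mutable.HashMap[String, ExecutorDecommissionInfo]()
   
     def registerExecutor(executorId: String): Unit = synchronized {
       registeredExecutors += executorId
     }
   
     def executorDecommission(executorId: String, info: ExecutorDecommissionInfo): Unit =
       synchronized {
         // Record nothing for an executor that has already been removed, so the map
         // cannot accumulate entries that will never be cleaned up.
         if (registeredExecutors.contains(executorId)) {
           val old = executorsPendingDecommission.get(executorId)
           // Keep isHostDecommissioned == true if we have ever seen it.
           if (old.isEmpty || !old.get.isHostDecommissioned) {
             executorsPendingDecommission(executorId) = info
           }
         }
       }
   
     def removeExecutor(executorId: String): Unit = synchronized {
       registeredExecutors -= executorId
       // Clean up so decommission state does not leak after the executor is gone.
       executorsPendingDecommission -= executorId
     }
   
     def getExecutorDecommissionInfo(executorId: String): Option[ExecutorDecommissionInfo] =
       synchronized { executorsPendingDecommission.get(executorId) }
   }
   ```
   
   With this shape, a `DecommissionExecutor` message that loses the race against an executor-loss event is simply ignored, which matches the best-effort framing above.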



