holdenk commented on a change in pull request #26440: [WIP][SPARK-20628][CORE] 
Start to improve Spark decommissioning & preemption support
URL: https://github.com/apache/spark/pull/26440#discussion_r345316610
 
 

 ##########
 File path: 
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
 ##########
 @@ -137,6 +141,8 @@ private[spark] class CoarseGrainedExecutorBackend(
     case LaunchTask(data) =>
       if (executor == null) {
         exitExecutor(1, "Received LaunchTask command but executor was null")
+      } else if (decommissioned) {
+        logWarning("Asked to launch a task while decommissioned. Not 
launching.")
 
 Review comment:
   This is a really good catch; we could totally have had a race condition here that would lose tasks until the executor exited.
   
   I can think of 3 ways we could approach solving this:
   
   1) Since we don't want to bounce all of the tasks on the executor when it's decommissioned or preempted, I think the best thing to do here is to keep allowing task launches and leave it to the driver to take advantage of the decommissioning state information.
   2) When scheduling a task, wait for an ack from the executor (this might slow things down when we have a lot of tasks to schedule).
   3) Add another message that the executor can send to the driver indicating 
that a specific task should be rescheduled (e.g. for when we've encountered 
this race condition specifically).
   
   I'm partial to solution #1, but I think #3 would be OK too; a rough sketch of what #3 could look like is below.
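   To make #3 a bit more concrete, here's a rough, self-contained sketch of the executor -> driver reschedule path. All of the names here (`TaskRejectedForDecommission`, `ExecutorSketch`, `DriverSketch`) are made up for illustration; the real change would presumably add a message to `CoarseGrainedClusterMessages` and handle it in the scheduler backend rather than use these toy classes.

```scala
// Hypothetical RPC messages (not the real Spark ones).
case class LaunchTask(taskId: Long, serializedTask: Array[Byte])
case class TaskRejectedForDecommission(executorId: String, taskId: Long)

// Executor side: guard LaunchTask on the decommissioned flag and bounce the
// task back to the driver instead of silently dropping it.
class ExecutorSketch(executorId: String, sendToDriver: Any => Unit) {
  @volatile private var decommissioned = false

  def decommissionSelf(): Unit = { decommissioned = true }

  def receive(msg: Any): Unit = msg match {
    case LaunchTask(taskId, _) if decommissioned =>
      // Don't run the task; ask the driver to put it back on the queue.
      sendToDriver(TaskRejectedForDecommission(executorId, taskId))
    case LaunchTask(taskId, _) =>
      println(s"launching task $taskId") // normal launch path
    case _ => // ignore everything else in this sketch
  }
}

// Driver side: treat the rejection like a lost task and resubmit it,
// ideally to an executor that isn't decommissioning.
class DriverSketch {
  private val pendingResubmit = scala.collection.mutable.Queue.empty[Long]

  def receive(msg: Any): Unit = msg match {
    case TaskRejectedForDecommission(execId, taskId) =>
      println(s"task $taskId rejected by decommissioned executor $execId; rescheduling")
      pendingResubmit.enqueue(taskId)
    case _ =>
  }
}

object RaceSketch extends App {
  val driver = new DriverSketch
  val exec = new ExecutorSketch("exec-1", driver.receive)
  exec.decommissionSelf()                             // decommission wins the race
  exec.receive(LaunchTask(42L, Array.emptyByteArray)) // task bounces back to the driver
}
```

   The nice property of this approach is that the race just costs one extra round trip instead of a silently lost task.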
