abellina commented on a change in pull request #24072: [SPARK-27112] : Spark Scheduler encounters two independent Deadlocks
URL: https://github.com/apache/spark/pull/24072#discussion_r264906757
 
 

 ##########
 File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ##########
 @@ -631,8 +635,10 @@ class CoarseGrainedSchedulerBackend(scheduler: TaskSchedulerImpl, val rpcEnv: Rp
       force: Boolean): Seq[String] = {
     logInfo(s"Requesting to kill executor(s) ${executorIds.mkString(", ")}")
 
+    val idleExecutorIds = executorIds.filter { id => force || !scheduler.isExecutorBusy(id) }
 
 Review comment:
   @squito If we wanted to prevent that, we would need to move the `val idleExecutorIds = executorIds.filter { id => force || !scheduler.isExecutorBusy(id) }` back inside that scheduler lock, right?
   
   > it also isn't the worst thing in the world if we occasionally kill an executor which just got a task scheduled on it.
   
   So we don't count this as a task failure, right? I'm not sure where to look in the code to verify that.
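  For what it's worth, here is a minimal sketch of the race the lock would close. `ToyScheduler` is a hypothetical stand-in for `TaskSchedulerImpl` (the `busy` set and `markBusy` are assumptions, not Spark code); the point is only that doing the filter inside one `synchronized` block means no task can land on an executor between the busy check and the kill decision:

```scala
// Hypothetical stand-in for TaskSchedulerImpl: tracks which executors
// currently have running tasks, guarded by this object's monitor.
class ToyScheduler {
  private val busy = scala.collection.mutable.Set[String]()

  def markBusy(id: String): Unit = synchronized { busy += id }

  def isExecutorBusy(id: String): Boolean = synchronized { busy.contains(id) }

  // Filtering while holding the lock: the busy check and the kill
  // decision are atomic with respect to task scheduling.
  def idleExecutors(executorIds: Seq[String], force: Boolean): Seq[String] =
    synchronized {
      executorIds.filter { id => force || !busy.contains(id) }
    }
}

object Demo extends App {
  val sched = new ToyScheduler
  sched.markBusy("exec-1")
  // exec-1 is busy, so with force = false only exec-2 is eligible to kill.
  println(sched.idleExecutors(Seq("exec-1", "exec-2"), force = false))
  // With force = true both are returned regardless of busyness.
  println(sched.idleExecutors(Seq("exec-1", "exec-2"), force = true))
}
```

  Doing the filter outside the lock (as in the diff above) keeps the critical section small, at the cost of the benign race you describe.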
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
