Github user GraceH commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7888#discussion_r44497839
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager(
       private def onExecutorBusy(executorId: String): Unit = synchronized {
         logDebug(s"Clearing idle timer for $executorId because it is now running a task")
         removeTimes.remove(executorId)
    +
    +    // The executor may have been scheduled for removal by mistake because the
    +    // async listener marked it as idle. See SPARK-9552.
    +    if (executorsPendingToRemove.contains(executorId)) {
    --- End diff --
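
    For context, a minimal, self-contained sketch of the race this hunk is about
    (SPARK-9552): the async listener can briefly see an executor as idle and
    schedule it for removal just before a task lands on it, so when the executor
    turns busy again the stale pending-removal entry has to be cleared. The
    object and field names below only mimic ExecutorAllocationManager's
    bookkeeping and are not the actual code of this PR.

        import scala.collection.mutable

        object BusyExecutorSketch {
          // Stand-ins for the manager's bookkeeping: idle-timeout deadlines and
          // executors for which a kill has already been requested.
          private val removeTimes = mutable.Map.empty[String, Long]
          private val executorsPendingToRemove = mutable.Set.empty[String]

          def onExecutorBusy(executorId: String): Unit = synchronized {
            // Clear the idle timer: the executor is running a task again.
            removeTimes.remove(executorId)

            // Undo a removal that was requested only because the listener
            // momentarily judged the executor to be idle.
            if (executorsPendingToRemove.contains(executorId)) {
              executorsPendingToRemove -= executorId
            }
          }

          def main(args: Array[String]): Unit = {
            removeTimes("exec-1") = System.currentTimeMillis()
            executorsPendingToRemove += "exec-1" // mistakenly scheduled for removal
            onExecutorBusy("exec-1")
            assert(!executorsPendingToRemove.contains("exec-1"))
            println("exec-1 is busy and no longer pending removal")
          }
        }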
    
    Yes, I know that. Given the API design and implementation (the method is
    named `killExecutors`), I'd prefer to handle the more general case, in case
    someone else calls it in the future. Besides, killing executors in a batch
    is better than killing them one by one. If it is okay not to take that into
    account, I will handle it according to the existing case.

    Thanks for your comments and suggestions. I will change the code
    accordingly.
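
    A minimal sketch of the batch-kill point above: collect the executors that
    should go away and issue a single request, rather than one request per
    executor. The `requestKill` parameter is a placeholder for whatever backend
    call actually performs the kill (for example `SparkContext.killExecutors`,
    which accepts a sequence of executor ids); it is an illustrative assumption,
    not the code in this PR.

        object BatchKillSketch {
          // Prefer one batched kill request over a loop of single kills; this is
          // the more general case a `killExecutors`-style API is designed for.
          def killIdleExecutors(
              idleExecutors: Seq[String],
              requestKill: Seq[String] => Boolean): Boolean = {
            if (idleExecutors.isEmpty) {
              true
            } else {
              // One request for the whole batch instead of
              // idleExecutors.forall(e => requestKill(Seq(e)))
              requestKill(idleExecutors)
            }
          }

          def main(args: Array[String]): Unit = {
            val ok = killIdleExecutors(
              Seq("exec-2", "exec-3"),
              ids => { println(s"killing ${ids.mkString(", ")} in one request"); true })
            println(s"batch kill succeeded: $ok")
          }
        }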

