Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7888#discussion_r44497967
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager(
       private def onExecutorBusy(executorId: String): Unit = synchronized {
         logDebug(s"Clearing idle timer for $executorId because it is now 
running a task")
         removeTimes.remove(executorId)
    +
    +    // The executor may have been wrongly marked for removal because the async listener reported it as idle.
    +    // See SPARK-9552.
    +    if (executorsPendingToRemove.contains(executorId)) {
    --- End diff ---
    
    What I meant is that since this class doesn't call `killExecutors` with 
multiple IDs, your question about updating `executorsPendingToRemove` in that 
case does not apply.
    
    `killExecutors` should return true if at least one executor was killed; that's not ideal, but it fits the current API. Either way, it doesn't affect `executorsPendingToRemove` at all.
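
    And for the return value, something along these lines (again a made-up
    sketch; `knownExecutors` and `doKill` are stand-ins, not the scheduler
    backend's real code): every requested ID is attempted, and the single
    Boolean is true as soon as any of them was actually killed.

    ```scala
    import scala.collection.mutable

    object KillSemanticsSketch {
      // Stand-in for the set of executors the backend knows about.
      private val knownExecutors = mutable.Set("1", "2", "3")

      // Stand-in for the actual kill request; true if the ID was known and removed.
      private def doKill(executorId: String): Boolean = knownExecutors.remove(executorId)

      def killExecutors(executorIds: Seq[String]): Boolean = {
        val results = executorIds.map(doKill) // attempt every requested ID
        results.exists(identity)              // succeed if at least one was killed
      }

      def main(args: Array[String]): Unit = {
        println(killExecutors(Seq("2", "99"))) // true: "2" was killed even though "99" is unknown
        println(killExecutors(Seq("99")))      // false: nothing was killed
      }
    }
    ```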

