Github user GraceH commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7888#discussion_r44375390
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -509,6 +511,13 @@ private[spark] class ExecutorAllocationManager(
       private def onExecutorBusy(executorId: String): Unit = synchronized {
         logDebug(s"Clearing idle timer for $executorId because it is now 
running a task")
         removeTimes.remove(executorId)
    +
    +    // The executor may have been mistakenly added to executorsPendingToRemove
    +    // because the async listener marked it as idle; see SPARK-9552.
    +    if (executorsPendingToRemove.contains(executorId)) {
    --- End diff ---
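
    The hunk above is cut off at the `if`, so the following is only a rough, self-contained sketch of the idea under discussion (an executor mistakenly queued for removal is un-queued once it becomes busy); the fields mirror those referenced in the hunk, but the body of the `if` is a hypothetical guess, not the actual code in the PR:

        import scala.collection.mutable

        // Minimal stand-in, not Spark's ExecutorAllocationManager.
        class AllocationSketch {
          private val removeTimes = mutable.Map.empty[String, Long]
          private val executorsPendingToRemove = mutable.Set.empty[String]

          def onExecutorBusy(executorId: String): Unit = synchronized {
            removeTimes.remove(executorId)
            // SPARK-9552: the async listener may have marked this executor idle
            // and queued it for removal even though it is now running a task.
            if (executorsPendingToRemove.contains(executorId)) {
              executorsPendingToRemove -= executorId  // hypothetical continuation
            }
          }
        }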
    
    @vanzin, in the original design I changed the return value of that function (`killExecutors`), not only because it was requested in the last round of review comments, but also because the current behavior is a little strange. For example, if you ask to kill 3 executors with force=false and one of them turns out to be busy, it is hard to tell from a single result whether the kill succeeded. If we only support a single executor here, it is much simpler and more straightforward.
    Besides, this was changed according to the last round of review comments: `killExecutors` only returns the acknowledgement (per its documentation), which does not indicate the status of the kill action. Please let me know your further thoughts.
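
    To make that ambiguity concrete, here is a minimal, self-contained Scala sketch (not Spark's actual API; `killExecutors`, `killExecutor`, the `busy` set, and the executor ids are hypothetical stand-ins) contrasting a bulk kill, whose single boolean cannot say which kills happened when force=false and one executor is busy, with a per-executor kill whose result refers to exactly one executor:

        object KillSemanticsSketch {
          // Hypothetical stand-in for executors currently running tasks.
          private val busy = Set("exec-2")

          // Bulk variant: one boolean for several executors. With force = false
          // and one busy executor, the caller cannot tell which kills happened.
          def killExecutors(ids: Seq[String], force: Boolean): Boolean =
            ids.forall(id => force || !busy(id))

          // Single-executor variant: the boolean refers to exactly one executor.
          def killExecutor(id: String, force: Boolean): Boolean =
            force || !busy(id)

          def main(args: Array[String]): Unit = {
            val ids = Seq("exec-1", "exec-2", "exec-3")
            // Bulk call returns false, even though exec-1 and exec-3 were killable.
            println(killExecutors(ids, force = false))
            // Per-executor calls attribute each result to a single executor.
            ids.foreach(id => println(s"$id -> ${killExecutor(id, force = false)}"))
          }
        }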

