Github user harishreedharan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6508#discussion_r31579551
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -443,9 +466,27 @@ private[spark] class ExecutorAllocationManager(
       private def onExecutorIdle(executorId: String): Unit = synchronized {
         if (executorIds.contains(executorId)) {
          if (!removeTimes.contains(executorId) && !executorsPendingToRemove.contains(executorId)) {
    +
    +        val hasCachedBlocks =
    +          executorsWithCachedBlocks.contains(executorId) ||
    +            executorEndpoints.get(executorId).exists(_.askWithRetry[Boolean](HasCachedBlocks))
    +
    +        if (hasCachedBlocks) executorsWithCachedBlocks += executorId
    --- End diff --
    
    OK, I tried making the whole method async, but that breaks a bunch of tests, which makes me suspect it would break some real behavior too. That brings me back to the question @lianhuiwang asked above: can we rely on the master's state? In what cases does the master's view go out of sync with the executor's actual state?
    
    @andrewor14 - What do you think about avoiding the RPC and just relying on the state in the BMM (BlockManagerMaster)?
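    
    For what it's worth, here is roughly what I'm picturing - a sketch only, assuming the BMM exposed something like a `hasCachedBlocks(executorId)` lookup on the driver side (that method is hypothetical, not existing API):
    
    ```scala
    // Sketch: replace the executor RPC with a driver-side lookup.
    // `blockManagerMaster.hasCachedBlocks` is an assumed method on the BMM.
    private def onExecutorIdle(executorId: String): Unit = synchronized {
      if (executorIds.contains(executorId)) {
        if (!removeTimes.contains(executorId) &&
            !executorsPendingToRemove.contains(executorId)) {
          // The BMM already tracks block status reported by each executor's
          // BlockManager, so no round trip to the executor is needed here.
          val hasCachedBlocks = blockManagerMaster.hasCachedBlocks(executorId)
          if (hasCachedBlocks) executorsWithCachedBlocks += executorId
          // ... rest of the existing idle-timeout logic unchanged ...
        }
      }
    }
    ```
    
    The open question is still the one above, though: whether the BMM's view can lag behind the executor when blocks are dropped.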

