Github user zhonghaihua commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10794#discussion_r53562669
  
    --- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ---
    @@ -155,6 +158,9 @@ class CoarseGrainedSchedulerBackend(scheduler: 
TaskSchedulerImpl, val rpcEnv: Rp
               // in this block are read when requesting executors
               CoarseGrainedSchedulerBackend.this.synchronized {
                 executorDataMap.put(executorId, data)
    +            if (currentExecutorIdCounter < Integer.parseInt(executorId)) {
    +              currentExecutorIdCounter = Integer.parseInt(executorId)
    +            }
    --- End diff --
    
    Thanks for reviewing it. In my understanding, we cannot get the max executor ID from `executorDataMap`. When the AM fails, all the executors are disconnected and removed; at that point, as the code in `CoarseGrainedSchedulerBackend.removeExecutor` shows, the executor information in `executorDataMap` is also removed.
    So I think the executor information in `executorDataMap` is incomplete. What do you think?
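    To illustrate the argument above, here is a minimal, self-contained sketch (the object and method names are hypothetical, not Spark's actual code): a map that tracks only *live* executors loses history when entries are removed on AM failure, while a monotonically increasing counter, updated with the same guard as in the diff, still remembers the highest executor ID ever registered.

    ```scala
    import scala.collection.mutable

    // Hypothetical sketch: why the max executor ID cannot be recovered from a
    // map of live executors alone, and why a separate counter is needed.
    object ExecutorIdCounterSketch {
      private val executorDataMap = mutable.Map[String, String]()
      private var currentExecutorIdCounter = 0

      def registerExecutor(executorId: String): Unit = synchronized {
        executorDataMap.put(executorId, s"data-$executorId")
        // Same guard as in the diff: only ever move the counter forward.
        if (currentExecutorIdCounter < executorId.toInt) {
          currentExecutorIdCounter = executorId.toInt
        }
      }

      // Mirrors CoarseGrainedSchedulerBackend.removeExecutor: the entry is
      // dropped from the map, so the map forgets the ID.
      def removeExecutor(executorId: String): Unit = synchronized {
        executorDataMap.remove(executorId)
      }

      def liveExecutors: Set[String] = synchronized { executorDataMap.keySet.toSet }
      def maxIdSeen: Int = synchronized { currentExecutorIdCounter }
    }
    ```

    After simulating an AM failure (registering executors "1".."3" and then removing them all), `liveExecutors` is empty, so the map gives no way to find the previous maximum, while `maxIdSeen` still returns 3.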

