mridulm commented on pull request #34536:
URL: https://github.com/apache/spark/pull/34536#issuecomment-964600485


   For any mutation of the `executorDataMap`, we currently always fire a corresponding event - no?
   That is, scheduler state itself is always updated via `SparkListenerExecutorAdded`/`SparkListenerExecutorRemoved`.
   So why would we end up in a situation where we need `SparkListenerExecutorRemoved` to be fired again?
   
   What I mean is, we have only two mutations of `executorDataMap`, namely:
   * In `RegisterExecutor` 
[here](https://github.com/apache/spark/blob/ebe7bb62176ac3c29b0c238e411a0dc989371c33/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L266)
 and
   * In `removeExecutor` 
[here](https://github.com/apache/spark/blob/ebe7bb62176ac3c29b0c238e411a0dc989371c33/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L417)
   
   In both of these cases, a companion event is fired - so why do we need to fire an event if the executor is missing?
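
   To make the invariant I'm describing concrete, here is a minimal sketch. The types and method bodies below (`ExecutorData`, `SchedulerBackendSketch`, the `postedEvents` buffer) are simplified stand-ins for illustration, not the actual `CoarseGrainedSchedulerBackend` implementation - the point is only that each of the two mutations of `executorDataMap` is paired with its companion event, so a missing executor implies no state changed and hence no extra event is needed:

   ```scala
   import scala.collection.mutable

   // Simplified stand-ins for the real Spark types (assumption, for illustration).
   case class ExecutorData(host: String, totalCores: Int)

   sealed trait ListenerEvent
   case class SparkListenerExecutorAdded(execId: String) extends ListenerEvent
   case class SparkListenerExecutorRemoved(execId: String) extends ListenerEvent

   class SchedulerBackendSketch {
     private val executorDataMap = mutable.HashMap.empty[String, ExecutorData]
     // Stands in for posting to the listener bus.
     val postedEvents = mutable.ArrayBuffer.empty[ListenerEvent]

     // Mutation #1: the RegisterExecutor path - add entry, fire companion event.
     def registerExecutor(id: String, data: ExecutorData): Unit = {
       executorDataMap(id) = data
       postedEvents += SparkListenerExecutorAdded(id)
     }

     // Mutation #2: the removeExecutor path - remove entry, fire companion event.
     def removeExecutor(id: String): Unit = {
       executorDataMap.remove(id) match {
         case Some(_) => postedEvents += SparkListenerExecutorRemoved(id)
         case None    => // executor unknown: no state mutated, so nothing to fire
       }
     }

     def knows(id: String): Boolean = executorDataMap.contains(id)
   }
   ```

   With this pairing, a `removeExecutor` for an id that was never registered (or was already removed) finds no entry, and a `SparkListenerExecutorRemoved` for it must already have been fired by whichever path removed it.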


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
