Github user vanzin commented on the issue:

    https://github.com/apache/spark/pull/15109
  
    @cenyuhai I don't think this is the right fix. You're just undoing part of 
what `expireDeadHosts()` does - which means that if you get into this situation 
you'll just leak entries in the `executorLastSeen` map. `expireDeadHosts()` 
will kill the executor asynchronously, so it's possible that a heartbeat will 
arrive and trigger this code path before the executor goes away.
    
    Perhaps instead the response in this case should not ask the executor to 
re-register its block manager. But this is definitely not the right fix.
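    To illustrate the race being described, here is a hypothetical, simplified 
model (the names echo `HeartbeatReceiver`'s `executorLastSeen` and 
`expireDeadHosts()`, but this is a sketch, not the actual Spark code): after 
expiry drops the map entry, a late heartbeat can still arrive before the 
asynchronous kill completes, and re-inserting the entry at that point would 
leak it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal model of the expiry/heartbeat race; not Spark source.
public class HeartbeatRace {
    static final Map<String, Long> executorLastSeen = new HashMap<>();

    // expireDeadHosts() drops the entry and requests an asynchronous kill;
    // the executor can keep sending heartbeats until that kill lands.
    static void expireDeadHosts(String executorId) {
        executorLastSeen.remove(executorId);
    }

    // A late heartbeat from an already-expired executor. Re-inserting the
    // map entry here (as the PR does) leaks it once the async kill lands;
    // the alternative suggested above is to reply without asking the
    // executor to re-register its block manager, leaving the map alone.
    static boolean receiveHeartbeat(String executorId, long now) {
        if (executorLastSeen.containsKey(executorId)) {
            executorLastSeen.put(executorId, now);
            return false; // known executor: no re-registration needed
        }
        return false; // expired executor: don't ask it to re-register
    }

    public static void main(String[] args) {
        executorLastSeen.put("exec-1", 0L);
        expireDeadHosts("exec-1");
        boolean reregister = receiveHeartbeat("exec-1", 100L);
        System.out.println(reregister);                            // false
        System.out.println(executorLastSeen.containsKey("exec-1")); // false
    }
}
```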
    
    We should close this PR.
