GitHub user mccheah commented on the pull request:
https://github.com/apache/spark/pull/8007#issuecomment-131915844
Got it - I was hoping GetExecutorLossReason wouldn't be called more than
once; that assumption is what allows me to clean up the map there.
In general I may have been overly paranoid in this PR about cleaning up
data structures by removing items from maps. However, I'm not sure what the
ramifications of letting the maps fill up indefinitely would be - presumably
that would cause problems for really long-running SparkContexts. If anyone
has better suggestions for the cleanup logic, I'd appreciate them (a sketch
of the pattern I'm describing is below).
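
Roughly, the remove-on-read pattern I'm relying on looks like this - a
minimal sketch with illustrative names (LossReasonTracker,
getAndClearLossReason are hypothetical, not the actual code in this PR):

```scala
import scala.collection.mutable

// Hypothetical sketch of remove-on-read cleanup; names are illustrative.
class LossReasonTracker {
  private val pendingLossReasons = mutable.HashMap.empty[String, String]

  def recordLossReason(executorId: String, reason: String): Unit =
    pendingLossReasons(executorId) = reason

  // Answer the query and drop the entry in one step. This keeps the map
  // from growing, but a second query for the same executor now finds
  // nothing - which is why this only works if GetExecutorLossReason is
  // asked at most once per executor.
  def getAndClearLossReason(executorId: String): Option[String] =
    pendingLossReasons.remove(executorId)
}
```

The alternative - leaving entries in the map - is safe for repeated
queries but grows unboundedly over the lifetime of the SparkContext, which
is the trade-off I'm asking about.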