Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5536#issuecomment-93665373
It would be good to come up with a test that can reproduce the issue. I believe it is actually meant to be acceptable for numExecutorsPending to sit below 0 in situations where we have more executors than we need.

My suspicion is that simply capping numExecutorsPending at 0 would be a duct-tape solution, and that we're slowly leaking downwards. My further suspicion is that the right fix involves removing executorsPendingToRemove from the targetNumExecutors calculation: when we have too many executors, we're probably double-counting.

This is somewhat hand-wavy, but I'm happy to look deeper if you'd like.
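The double-counting suspicion can be sketched with a toy model. This is plain Python rather than Spark's actual Scala code, and every name and formula below is an illustrative assumption, not the real ExecutorAllocationManager bookkeeping:

```python
# Toy model of the double-counting suspicion.  NOT the actual
# ExecutorAllocationManager: the field names echo the comment, but the
# bookkeeping rules here are illustrative assumptions only.

class ToyAllocationManager:
    def __init__(self):
        self.executor_ids = set()                  # currently registered
        self.executors_pending_to_remove = set()   # kill requested, not yet done
        self.num_executors_pending = 0             # requested, not yet registered

    def target_num_executors(self, count_pending_to_remove=True):
        # With count_pending_to_remove=True this mirrors the suspect
        # formula: an executor awaiting removal is still in executor_ids,
        # so subtracting executors_pending_to_remove counts it twice.
        target = len(self.executor_ids) + self.num_executors_pending
        if count_pending_to_remove:
            target -= len(self.executors_pending_to_remove)
        return target

    def reconcile_pending(self, target):
        # A reconciliation step that shrinks num_executors_pending to close
        # the gap between the target and the registered executors.  With the
        # suspect formula, the in-flight removal gets charged a second time
        # here, so the counter drifts below zero.
        self.num_executors_pending += target - len(self.executor_ids)


mgr = ToyAllocationManager()
mgr.executor_ids = {"e1", "e2"}
mgr.executors_pending_to_remove = {"e1"}

# Suspect formula: 2 registered + 0 pending - 1 pending-to-remove = 1.
suspect_target = mgr.target_num_executors(count_pending_to_remove=True)
mgr.reconcile_pending(suspect_target)
leaked = mgr.num_executors_pending    # 1 - 2 = -1: leaked below 0

# Without the subtraction, the same reconciliation leaves pending at 0;
# the removal shows up later, when e1 actually drops out of executor_ids.
mgr.num_executors_pending = 0
clean_target = mgr.target_num_executors(count_pending_to_remove=False)
mgr.reconcile_pending(clean_target)
stable = mgr.num_executors_pending    # 2 - 2 = 0

print(leaked, stable)
```

The point of the sketch is only that a slot awaiting removal can be accounted for twice (once in executor_ids, once in executors_pending_to_remove) while the kill is in flight, which is one way a pending counter could leak downward rather than merely touch 0.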