Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/15249
@kayousterhout
Agree with (1): a permanent blacklist will effectively work the same way for
executor shutdown.
Re (2): a task failure is not necessarily due only to resource restriction
or to (1). It could also be a Byzantine failure, an interaction (not necessarily
contention) with other tasks running on the same executor/node, issues up or
down the stack (particularly multi-thread safety), external library issues, etc.
If it is recoverable, then a timeout plus retry will alleviate it without
needing to recompute on a different executor/node.
If it is not recoverable (within a reasonable time), then the current
permanent-blacklist logic works.
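To make the two paths concrete, here is a toy sketch (plain Python, not Spark's actual scheduler; the function and executor names are hypothetical): a transient failure clears after a blacklist timeout and the retry succeeds on the same executor, while a non-recoverable failure falls through to a permanent blacklist.

```python
import time

def run_with_blacklist_timeout(task, executor, is_recoverable,
                               blacklist_timeout_s, max_retries):
    """Toy model of the trade-off discussed above: retry a failed task on
    the same executor after a blacklist timeout, or permanently blacklist
    the executor when the failure cannot recover."""
    for attempt in range(max_retries):
        if task(executor, attempt):
            return f"succeeded on {executor} (attempt {attempt + 1})"
        if not is_recoverable:
            # Non-recoverable failure: current permanent-blacklist logic.
            return f"{executor} permanently blacklisted"
        # Recoverable failure: wait out the transient condition, then retry
        # on the same executor instead of recomputing elsewhere.
        time.sleep(blacklist_timeout_s)
    return f"{executor} permanently blacklisted after {max_retries} attempts"

# A flaky task that fails once and then recovers (e.g. transient contention):
flaky = lambda executor, attempt: attempt >= 1
print(run_with_blacklist_timeout(flaky, "exec-1", True, 0.0, 3))
# succeeded on exec-1 (attempt 2)
```

The hard part, as noted below, is that the scheduler cannot observe `is_recoverable` directly; it has to guess it from failure counts and timing.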
Unfortunately, determining which case applies is the problem. As @tgravescs
mentioned, resource contention can at times be a long-lived issue as well.
Ideally, if the blacklist timeout is less than the scheduler delay, then a
retry can help; if not, it depends on the job's characteristics (how many
partitions, etc).
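The timeout-versus-delay point can be reduced to simple arithmetic. A hedged sketch (hypothetical helper, not a Spark API): if the blacklist expires before the scheduler would launch the retry anyway, retrying on the same executor costs nothing extra; otherwise the retry pays the difference, and whether that is acceptable depends on the job.

```python
def extra_wait_for_local_retry(blacklist_timeout_s, scheduler_delay_s):
    """Extra time a same-executor retry must wait beyond the scheduler
    delay. Zero means the blacklist timeout is hidden entirely by the
    delay, so the retry is effectively free."""
    return max(0.0, blacklist_timeout_s - scheduler_delay_s)

# Timeout shorter than the delay: retry on the same executor is free.
extra_wait_for_local_retry(1.0, 2.0)   # 0.0
# Timeout longer than the delay: the retry pays the 3s difference,
# and it may be cheaper to run on a different executor/node instead.
extra_wait_for_local_retry(5.0, 2.0)   # 3.0
```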