GitHub user redsanket opened a pull request:
https://github.com/apache/spark/pull/21475
[SPARK-24416] Fix configuration specification for killBlacklisted executors
## What changes were proposed in this pull request?
spark.blacklist.killBlacklistedExecutors is defined as
(Experimental) If set to "true", allow Spark to automatically kill, and
attempt to re-create, executors when they are blacklisted. Note that, when an
entire node is added to the blacklist, all of the executors on that node will
be killed.
I presume the killing of blacklisted executors only happens after the stage
completes successfully and all tasks have finished, or on fetch failures
(updateBlacklistForFetchFailure/updateBlacklistForSuccessfulTaskSet). This is
confusing because the definition states that the executor will be killed and
recreated as soon as it is blacklisted. That is not the case: while a stage is
in progress and an executor is blacklisted, no cleanup is attempted until the
stage finishes.
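For reference, the behavior under discussion is controlled by the blacklisting
settings in spark-defaults.conf; a minimal example (property names are the real
Spark 2.x configuration keys, the values shown are illustrative):

```properties
# Enable task/executor blacklisting (off by default)
spark.blacklist.enabled                   true
# Allow Spark to kill and re-create blacklisted executors --
# as noted above, this happens after the stage completes or
# on a fetch failure, not immediately upon blacklisting
spark.blacklist.killBlacklistedExecutors  true
```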
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/redsanket/spark SPARK-24416
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21475.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21475
----
commit f08f74a3a774f9e2768f7924c4438516a4106b7c
Author: Sanket Chintapalli <schintap@...>
Date: 2018-05-31T22:16:39Z
Fix configuration specification for killBlacklisted executors
----