Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/8760#discussion_r43796970
--- Diff: docs/configuration.md ---
@@ -1141,6 +1141,48 @@ Apart from these, the following properties are also
available, and may be useful
</td>
</tr>
<tr>
+ <td><code>spark.scheduler.blacklist.enabled</code></td>
+ <td>true</td>
+ <td>
+    If set to true, the executor blacklist feature is enabled, to avoid
allocating new tasks on bad executors. The logic that decides whether an
executor is bad is based on the BlacklistStrategy, which is also configurable.
+ </td>
+</tr>
+<tr>
+ <td><code>spark.scheduler.executorTaskBlacklistTime</code></td>
+ <td>0L</td>
+ <td>
+    The threshold used to decide when to un-blacklist an executor: if its
last failure time is older than the current time minus
executorTaskBlacklistTime, the executor is removed from the blacklist.
--- End diff --
Sorry, I'm going to backtrack against what I said earlier. Now I think it
probably is better if you go with your original name
"spark.scheduler.blacklist.timeout", so that all the options are
"spark.scheduler.blacklist.xxx". We should still support
"spark.scheduler.executorTaskBlacklistTime", even if it's undocumented.
Also, let's change the conf to include a time unit like the others, so the
default would be "0s".
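
As a rough sketch of that fallback idea (not the actual patch; the helper
name and the way the keys are read here are assumptions on my part), the
scheduler could do something like:

```scala
import org.apache.spark.SparkConf

// Hypothetical helper: prefer the new spark.scheduler.blacklist.timeout key,
// but still honor the legacy spark.scheduler.executorTaskBlacklistTime.
def blacklistTimeoutMs(conf: SparkConf): Long = {
  if (conf.contains("spark.scheduler.blacklist.timeout")) {
    // getTimeAsMs understands suffixed values like "0s", "5m", "1h"
    conf.getTimeAsMs("spark.scheduler.blacklist.timeout")
  } else {
    // legacy key, defaulting to "0s"; bare numbers are read as milliseconds
    conf.getTimeAsMs("spark.scheduler.executorTaskBlacklistTime", "0s")
  }
}
```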
I'd also like to suggest rewording the actual text to: "If executor
blacklisting is enabled, this controls how long an executor remains in the
blacklist before it is returned to the pool of available executors."
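
For what it's worth, with the renamed keys above a user-facing example (key
names assumed, pending the final patch) would look like:

```scala
import org.apache.spark.SparkConf

// Assumed key names from the suggestion above; subject to change in this PR.
val conf = new SparkConf()
  .set("spark.scheduler.blacklist.enabled", "true")
  // Time-unit suffix like other Spark timeouts; "1h" keeps a bad executor
  // blacklisted for an hour before it returns to the available pool.
  .set("spark.scheduler.blacklist.timeout", "1h")
```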