pan3793 commented on PR #43863:
URL: https://github.com/apache/spark/pull/43863#issuecomment-1815730673
> What is the default value - I would have expected to see one about yarn and one about k8s?
The default value of `spark.executor.maxNumFailures` is calculated dynamically at runtime:
```
// Default to twice the number of executors (twice the maximum number of
// executors if dynamic allocation is enabled), with a minimum of 3.
def maxNumExecutorFailures(sparkConf: SparkConf): Int = {
  val effectiveNumExecutors =
    if (Utils.isStreamingDynamicAllocationEnabled(sparkConf)) {
      sparkConf.get(STREAMING_DYN_ALLOCATION_MAX_EXECUTORS)
    } else if (Utils.isDynamicAllocationEnabled(sparkConf)) {
      sparkConf.get(DYN_ALLOCATION_MAX_EXECUTORS)
    } else {
      sparkConf.get(EXECUTOR_INSTANCES).getOrElse(0)
    }
  // By default, effectiveNumExecutors is Int.MaxValue if dynamic allocation
  // is enabled. We need to avoid the integer overflow here.
  val defaultMaxNumExecutorFailures = math.max(3,
    if (effectiveNumExecutors > Int.MaxValue / 2) Int.MaxValue
    else 2 * effectiveNumExecutors)
  sparkConf.get(MAX_EXECUTOR_FAILURES).getOrElse(defaultMaxNumExecutorFailures)
}
```
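To make the overflow guard concrete, here is a standalone sketch of the same default calculation (Python used only for illustration; `INT_MAX` stands in for Scala's `Int.MaxValue`):

```python
INT_MAX = 2**31 - 1  # Scala's Int.MaxValue

def default_max_failures(effective_num_executors: int) -> int:
    # Doubling Int.MaxValue would overflow a 32-bit Int, so cap at INT_MAX
    # when the effective executor count exceeds half of it.
    if effective_num_executors > INT_MAX // 2:
        doubled = INT_MAX
    else:
        doubled = 2 * effective_num_executors
    # Never allow fewer than 3 tolerated failures.
    return max(3, doubled)

print(default_max_failures(0))        # no executors configured -> minimum of 3
print(default_max_failures(10))       # static allocation of 10 -> 20
print(default_max_failures(INT_MAX))  # dynamic allocation default -> INT_MAX
```

So with dynamic allocation enabled and `spark.dynamicAllocation.maxExecutors` left at its default (`Int.MaxValue`), the effective default is `Int.MaxValue`, i.e. executor failures are practically unlimited.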
The default value of `spark.executor.failuresValidityInterval` is `-1`:
```
private val executorFailuresValidityInterval =
  sparkConf.get(config.EXECUTOR_ATTEMPT_FAILURE_VALIDITY_INTERVAL_MS).getOrElse(-1L)
```
The code uses `createOptional` to match the previous behavior, but I'm open to changing it if the reviewers think that's necessary.
```
private[spark] val MAX_EXECUTOR_FAILURES =
  ConfigBuilder("spark.yarn.max.executor.failures")
    .version("1.0.0")
    .intConf
    .createOptional
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]