Github user jcuquemelle commented on a diff in the pull request:
https://github.com/apache/spark/pull/19881#discussion_r177444324
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -116,9 +120,12 @@ private[spark] class ExecutorAllocationManager(
   // TODO: The default value of 1 for spark.executor.cores works right now because dynamic
   // allocation is only supported for YARN and the default number of cores per executor in YARN is
   // 1, but it might need to be attained differently for different cluster managers
-  private val tasksPerExecutor =
+  private val tasksPerExecutorForFullParallelism =
--- End diff ---
This is not exposed; it is merely a more precise name for the actual
computation. I just wanted to state more clearly that the existing default
behavior maximizes parallelism.
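
For context, here is a minimal sketch of the computation this field describes
(assuming the usual `spark.executor.cores` and `spark.task.cpus` keys with
their defaults of 1; this is an illustration, not a verbatim copy of the Spark
source):

```scala
import org.apache.spark.SparkConf

object TasksPerExecutorSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
    // With the defaults (1 core per executor on YARN, 1 CPU per task),
    // each executor can run exactly one task at a time, so requesting one
    // executor per pending task maximizes parallelism.
    val executorCores = conf.getInt("spark.executor.cores", 1)
    val taskCpus      = conf.getInt("spark.task.cpus", 1)
    val tasksPerExecutorForFullParallelism = executorCores / taskCpus
    println(s"tasksPerExecutorForFullParallelism = $tasksPerExecutorForFullParallelism")
  }
}
```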
---