Github user jcuquemelle commented on a diff in the pull request:
https://github.com/apache/spark/pull/19881#discussion_r176365454
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -116,9 +120,12 @@ private[spark] class ExecutorAllocationManager(
  // TODO: The default value of 1 for spark.executor.cores works right now because dynamic
  // allocation is only supported for YARN and the default number of cores per executor in YARN is
  // 1, but it might need to be attained differently for different cluster managers
- private val tasksPerExecutor =
+ private val tasksPerExecutorForFullParallelism =
--- End diff ---
It is used in two places: one to validate arguments, and the other to actually compute the target number of executors. If I remove this variable, I will need to either store spark.executor.cores and spark.task.cpus instead, or fetch them each time we validate or compute the target number of executors.
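To illustrate the two call sites, here is a minimal, self-contained Scala sketch; it is not the actual ExecutorAllocationManager code, and the hard-coded config values, object name, and targetNumExecutors method are stand-ins for illustration only:

    // Hedged sketch of deriving tasksPerExecutorForFullParallelism once and
    // reusing it for both argument validation and target-executor computation.
    object ExecutorSizingSketch {
      // Stand-ins for SparkConf lookups (hypothetical values).
      val executorCores = 4   // spark.executor.cores
      val taskCpus = 1        // spark.task.cpus

      // Stored once instead of re-reading both settings at every call site.
      val tasksPerExecutorForFullParallelism: Int = executorCores / taskCpus

      // Use 1: validate arguments.
      require(tasksPerExecutorForFullParallelism >= 1,
        "spark.executor.cores must not be less than spark.task.cpus")

      // Use 2: compute the target number of executors for the current task backlog.
      def targetNumExecutors(pendingAndRunningTasks: Int): Int =
        math.ceil(pendingAndRunningTasks.toDouble / tasksPerExecutorForFullParallelism).toInt

      def main(args: Array[String]): Unit = {
        println(targetNumExecutors(10))  // 3 executors when 4 tasks fit per executor
      }
    }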
---