Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/19881#discussion_r166655479
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -114,9 +114,13 @@ private[spark] class ExecutorAllocationManager(
   // TODO: The default value of 1 for spark.executor.cores works right now because dynamic
   // allocation is only supported for YARN and the default number of cores per executor in YARN is
   // 1, but it might need to be attained differently for different cluster managers
-  private val tasksPerExecutor =
+  private val taskSlotPerExecutor =
     conf.getInt("spark.executor.cores", 1) / conf.getInt("spark.task.cpus", 1)
+  private val tasksPerExecutorSlot =
+    conf.getInt("spark.dynamicAllocation.tasksPerExecutorSlot", 1)
+
+  private val tasksPerExecutor = tasksPerExecutorSlot * taskSlotPerExecutor
--- End diff --
Since we aren't using the concept of slots, I think we should leave
tasksPerExecutor alone and put this functionality into maxNumExecutorsNeeded()
instead.
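
For illustration, a minimal sketch of that suggestion, assuming
maxNumExecutorsNeeded() keeps its current ceiling-division shape. The
standalone object and explicit parameters here are hypothetical, just to
make the snippet self-contained; in ExecutorAllocationManager these values
would come from the listener and conf:

// Rough, hypothetical sketch (not the actual patch): keep tasksPerExecutor
// unchanged and apply the spark.dynamicAllocation.tasksPerExecutorSlot
// multiplier only when computing how many executors are needed.
object MaxNumExecutorsSketch {
  def maxNumExecutorsNeeded(
      totalPendingTasks: Int,
      totalRunningTasks: Int,
      tasksPerExecutor: Int,          // spark.executor.cores / spark.task.cpus
      tasksPerExecutorSlot: Int): Int = {
    val numRunningOrPendingTasks = totalPendingTasks + totalRunningTasks
    // Round up so a partial executor's worth of tasks still gets an executor.
    math.ceil(
      numRunningOrPendingTasks.toDouble /
        (tasksPerExecutor * tasksPerExecutorSlot)).toInt
  }
}

E.g. with 10 pending + 2 running tasks, 4 task slots per executor, and a
multiplier of 2, this yields ceil(12 / 8) = 2 executors.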
---