Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/19881#discussion_r174145101
--- Diff: docs/configuration.md ---
@@ -1795,6 +1796,19 @@ Apart from these, the following properties are also available, and may be useful
  Lower bound for the number of executors if dynamic allocation is enabled.
</td>
</tr>
+<tr>
+ <td><code>spark.dynamicAllocation.fullParallelismDivisor</code></td>
--- End diff ---
Naming configs is really hard; there are lots of different opinions on it, and in
the end someone is going to be confused, so I need to think about this some more.
I see the reason to use Parallelism here rather than maxExecutors
(maxExecutorsDivisor could be confusing if people think it applies to the
maxExecutors config), but I also think parallelism could be confused with
spark.default.parallelism: this config isn't defining the number of tasks, it's
defining the number of executors to allocate based on the parallelism. Another
name I thought of is executorAllocationDivisor. I'll think about it some more and
get back.
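
For context, a minimal sketch of how a divisor-style config could shape the target
executor count. This is purely illustrative: the names, the formula, and the
numbers are assumptions for the example, not the actual ExecutorAllocationManager
logic or the final config name.

```scala
// Illustrative only: how a "divisor" config might scale down the number of
// executors requested, relative to running every pending task at once.
object AllocationSketch {
  def targetExecutors(pendingTasks: Int,
                      tasksPerExecutor: Int,
                      divisor: Double): Int = {
    // Executors needed to run all pending tasks at once ("full parallelism")...
    val fullParallelism = math.ceil(pendingTasks.toDouble / tasksPerExecutor)
    // ...scaled down by the divisor, so a divisor of 2 requests half as many.
    math.ceil(fullParallelism / divisor).toInt
  }

  def main(args: Array[String]): Unit = {
    // 1000 pending tasks, 4 tasks per executor, divisor 2 -> 125 executors.
    println(targetExecutors(pendingTasks = 1000, tasksPerExecutor = 4, divisor = 2.0))
  }
}
```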
---