Github user tgravescs commented on the issue:
    No, we don't strictly need it in the name. The reasoning behind it was to 
indicate that this is a divisor applied to the case where you have fully 
allocated executors for all the tasks and are running at full parallelism. 
    Are you suggesting we just use 
spark.dynamicAllocation.executorAllocationDivisor?  Other names thrown around 
were things like maxExecutorAllocationDivisor.  One thing we were trying to 
avoid is confusing it with the maxExecutors config.  Opinions?
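    For illustration, a minimal sketch (in Python, with made-up numbers; the 
function name and formula here are hypothetical, not Spark's actual 
implementation) of how such a divisor would scale down the full-parallelism 
executor target:

    ```python
    import math

    def target_executors(pending_tasks, tasks_per_executor, divisor):
        # Executors needed at full parallelism: one slot per pending task.
        full_parallelism = math.ceil(pending_tasks / tasks_per_executor)
        # The proposed config divides that maximum down, keeping at least one.
        return max(1, math.ceil(full_parallelism / divisor))

    # e.g. 1000 pending tasks, 4 tasks per executor, divisor of 2:
    print(target_executors(1000, 4, 2))  # 125 instead of 250
    ```

    With divisor = 1 the behavior is unchanged (full parallelism); larger 
values trade latency for fewer executors.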

