Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/20604#discussion_r169436456
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -334,6 +336,10 @@ private[spark] class ExecutorAllocationManager(
// If the new target has not changed, avoid sending a message to the cluster manager
if (numExecutorsTarget < oldNumExecutorsTarget) {
+ // We lower the target number of executors but don't actively kill any yet. We do this
--- End diff ---
I was trying to answer a different question -- if we don't kill the executors now, why even bother lowering the target number? That would be an alternative solution: don't adjust the target number here at all, and just wait until the executors are killed for being idle. (And really I'm just guessing at the logic.)

Lemme try to reword this some ...
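For context, here's a toy model of the two interacting paths being discussed -- this is a simplified sketch, not the actual `ExecutorAllocationManager` code, and the names (`updateTarget`, `onExecutorIdle`, the `State` case class) are illustrative. The point it tries to capture: lowering the target immediately (without killing) means that when the idle-timeout path later removes an executor, the cluster manager won't try to replace it.

```scala
// Simplified model of the decision under discussion.
object AllocationSketch {
  // target: how many executors we want; running: how many exist now
  final case class State(target: Int, running: Int)

  // When the desired target drops, only lower the recorded target.
  // Nothing is killed here; that happens on the idle-timeout path.
  def updateTarget(s: State, newTarget: Int): State =
    if (newTarget < s.target) s.copy(target = newTarget)
    else s // raising the target is a separate case, omitted here

  // Separate path: an idle executor is removed only while we are
  // above the (already lowered) target, so it won't be replaced.
  def onExecutorIdle(s: State): State =
    if (s.running > s.target) s.copy(running = s.running - 1)
    else s

  def main(args: Array[String]): Unit = {
    var s = State(target = 10, running = 10)
    s = updateTarget(s, 6)  // target drops, nothing killed yet
    assert(s == State(6, 10))
    s = onExecutorIdle(s)   // idle timeout removes one executor
    assert(s == State(6, 9))
  }
}
```

Under the alternative in the comment above (not lowering the target here), `onExecutorIdle` would see `running == target` and never shrink the pool, which is presumably why the target is lowered up front.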
---