Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20604#discussion_r169420133
  
    --- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
    @@ -334,6 +336,10 @@ private[spark] class ExecutorAllocationManager(
     
          // If the new target has not changed, avoid sending a message to the cluster manager
           if (numExecutorsTarget < oldNumExecutorsTarget) {
    +        // We lower the target number of executors but don't actively kill any yet.  We do this
    --- End diff ---
    
    I'm not sure I follow this comment.
    
    From my reading of it, it's saying that you don't want to kill executors because you don't want the cluster manager to immediately launch replacements for them. But how can that happen, if you're also lowering the target number?
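    For context, here is a minimal standalone sketch of the branch under discussion. This is not Spark's actual `ExecutorAllocationManager`; the class name, the delta-return convention, and the method signature are invented for illustration. It models the point that lowering the target alone should already prevent replacements: once the target is reduced, the cluster manager has no reason to launch new executors when idle ones later time out.

    ```scala
    // Hypothetical model (names invented, not Spark's real class) of
    // "lower the target but don't actively kill any executors yet".
    class AllocationModel(var numExecutorsTarget: Int) {
      // Returns the delta that would be reported to a (hypothetical)
      // cluster manager; negative means the target shrank.
      def updateTarget(maxNeeded: Int): Int = {
        val oldTarget = numExecutorsTarget
        // Never target fewer than 1, never raise the target here.
        numExecutorsTarget = math.min(numExecutorsTarget, math.max(maxNeeded, 1))
        if (numExecutorsTarget < oldTarget) {
          // Target lowered, but no executors are killed in this path.
          // Idle executors are reclaimed separately (e.g. by an idle
          // timeout), and with the target already reduced the cluster
          // manager will not replace them.
          numExecutorsTarget - oldTarget
        } else {
          0 // target unchanged: no message needed
        }
      }
    }

    object Demo extends App {
      val m = new AllocationModel(numExecutorsTarget = 8)
      println(m.updateTarget(maxNeeded = 3)) // target lowered 8 -> 3, delta -5
      println(m.updateTarget(maxNeeded = 3)) // unchanged, delta 0
    }
    ```

    Under this reading, actively killing executors in the same branch would be redundant for the replacement concern, which is what the question above is probing.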


---
