Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20604#discussion_r169437530
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1643,7 +1646,10 @@ class SparkContext(config: SparkConf) extends Logging {
       def killExecutors(executorIds: Seq[String]): Boolean = {
         schedulerBackend match {
           case b: ExecutorAllocationClient =>
     -        b.killExecutors(executorIds, replace = false, force = true).nonEmpty
    +        require(executorAllocationManager.isEmpty,
    --- End diff --
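    
    The quoted hunk cuts off mid-require, so for context, here is a hedged sketch of what the guarded method could look like as a whole. The require message and the fallback branch are assumptions for illustration, not part of the quoted diff:
    
        def killExecutors(executorIds: Seq[String]): Boolean = {
          schedulerBackend match {
            case b: ExecutorAllocationClient =>
              // Assumed guard: refuse manual kills while an
              // ExecutorAllocationManager (dynamic allocation) is active.
              require(executorAllocationManager.isEmpty,
                "killExecutors() unsupported with dynamic allocation on")
              b.killExecutors(executorIds, replace = false, force = true).nonEmpty
            case _ =>
              // Assumed fallback, mirroring similar SparkContext methods.
              logWarning("Killing executors is not supported by current scheduler.")
              false
          }
        }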
    
    What would calling this mean with dynamic allocation on? Note this API explicitly says it's meant to adjust resource usage downwards. If you've got just one executor and then you kill it, should your app sit with 0 executors? Or even if you've got 10 executors and you kill one -- when is dynamic allocation allowed to bump the total back up? I can't think of clear, useful semantics for this.
    
    (though this is not necessary to fix the bug, I could pull this out and move it to a discussion in a new jira)
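    
    To make the ambiguity concrete, a minimal sketch (the configuration is abbreviated -- real dynamic allocation also needs a cluster manager and an external shuffle service -- and the executor ID is illustrative):
    
        import org.apache.spark.{SparkConf, SparkContext}
    
        object KillWithDynamicAllocation {
          def main(args: Array[String]): Unit = {
            val conf = new SparkConf()
              .setAppName("kill-executors-demo")
              .set("spark.dynamicAllocation.enabled", "true")
            val sc = new SparkContext(conf)
    
            // Suppose the app currently holds one executor with ID "1"
            // (illustrative). The API is documented to adjust resource
            // usage downwards...
            sc.killExecutors(Seq("1"))
    
            // ...but nothing says whether the allocation manager may now
            // raise the total back up, or whether the app should sit at
            // 0 executors until load forces a new request.
            sc.stop()
          }
        }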


---
