Github user Ngone51 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20604#discussion_r182086337
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1643,7 +1646,10 @@ class SparkContext(config: SparkConf) extends Logging {
       def killExecutors(executorIds: Seq[String]): Boolean = {
         schedulerBackend match {
           case b: ExecutorAllocationClient =>
    -        b.killExecutors(executorIds, replace = false, force = true).nonEmpty
    +        require(executorAllocationManager.isEmpty,
    --- End diff --
    
    Hi @squito, I have some questions about these cases:
    >  If you've got just one executor, and then you kill it, should your app 
sit with 0 executors?
    
    If the app sits with 0 executors, pending tasks will build up, which leads `ExecutorAllocationManager` to raise the target number of executors again. So the app will not stay at 0 executors indefinitely.
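    
    Just to make that concrete, here is a minimal sketch (not the actual `ExecutorAllocationManager` code; the names and rounding are simplified) of how a task backlog pushes the target back up:
    ```
    // Simplified sketch: the allocation manager derives its target from the
    // number of running + pending tasks, so a non-empty backlog implies a
    // non-zero target even if the app currently has 0 executors.
    def maxExecutorsNeeded(pendingTasks: Int, runningTasks: Int, tasksPerExecutor: Int): Int =
      (pendingTasks + runningTasks + tasksPerExecutor - 1) / tasksPerExecutor

    // e.g. 5 pending tasks, 0 running, 4 tasks per executor => target of 2,
    // so the app is soon asked back up from 0 executors.
    ```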
    
    > Or even if you've got 10 executors, and you kill one -- when is dynamic 
allocation allowed to bump the total back up?
    
    For this case, to be honest, I don't quite get your point; that is probably down to my poor English.
    
    Also, what happens if we use this method without an `ExecutorAllocationManager`? And do we really need to adjust the target number of executors (i.e. pass `adjustTargetNumExecutors = true` below) when we are not using `ExecutorAllocationManager`?
    
    See these lines in `killExecutors()`:
    ```
    if (adjustTargetNumExecutors) {
      requestedTotalExecutors = math.max(requestedTotalExecutors - executorsToKill.size, 0)
      ...
      doRequestTotalExecutors(requestedTotalExecutors)
    }
    ```
    Setting `adjustTargetNumExecutors = true` changes `requestedTotalExecutors`. And IIUC, `requestedTotalExecutors` is only meaningful in dynamic allocation mode. So, if we are not using `ExecutorAllocationManager`, the allocation client may end up asking the cluster manager for `requestedTotalExecutors = 0` executors (which would be really bad). But an app without `ExecutorAllocationManager` actually has no limit on requesting executors by default.
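    
    For example (a hypothetical scenario, assuming dynamic allocation is disabled and the executor id is made up):
    ```
    // App started with a fixed number of executors,
    // spark.dynamicAllocation.enabled = false.
    sc.killExecutors(Seq("1"))
    // killExecutors() passes adjustTargetNumExecutors = true, so the backend
    // recomputes requestedTotalExecutors (a value that is never maintained
    // outside dynamic allocation) and calls doRequestTotalExecutors() with it,
    // so the cluster manager may be told to cap the app at 0 executors even
    // though the app never opted into such a cap.
    ```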
    
    Actually, I think this family of methods, including `killAndReplaceExecutor`, `requestExecutors`, etc., was designed with dynamic allocation mode in mind. And if we still want to use these methods when the app does not use `ExecutorAllocationManager`, we should not change `requestedTotalExecutors`, or perhaps should not request a specific total from the cluster manager at all.
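    
    Something along these lines is what I mean (just a rough sketch, not a concrete patch; `dynamicAllocationEnabled` is a stand-in for however the backend would check this):
    ```
    // Only adjust the target when dynamic allocation actually owns that number.
    if (adjustTargetNumExecutors && dynamicAllocationEnabled) {
      requestedTotalExecutors = math.max(requestedTotalExecutors - executorsToKill.size, 0)
      ...
      doRequestTotalExecutors(requestedTotalExecutors)
    }
    ```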
    
    WDYT?
    


