Github user Ngone51 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20604#discussion_r185159109
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1643,7 +1646,10 @@ class SparkContext(config: SparkConf) extends Logging {
   def killExecutors(executorIds: Seq[String]): Boolean = {
     schedulerBackend match {
       case b: ExecutorAllocationClient =>
-        b.killExecutors(executorIds, replace = false, force = true).nonEmpty
+        require(executorAllocationManager.isEmpty,
--- End diff ---
Hi @squito , thanks for your reply.
> but only *when* pending tasks increase.
`ExecutorAllocationManager` checks pending (or backlogged) tasks periodically, so we do not actually have to wait for an *increase*.
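To make that concrete, here is a toy sketch of the polling idea (this is *not* the real `ExecutorAllocationManager` code; the 100ms interval, the `pendingTasks()` hook and the target formula are all stand-ins): a fixed-delay tick recomputes the desired executor count from the current backlog, so the target gets updated on the next tick whether the backlog grew or shrank.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Toy polling sketch, not the actual Spark allocation manager.
object AllocationPollSketch {
  private val intervalMs = 100L                      // assumed polling interval
  private val timer = Executors.newSingleThreadScheduledExecutor()

  // Hypothetical hook standing in for listener-maintained backlog state.
  def pendingTasks(): Int = 0

  def start(): Unit = {
    val tick = new Runnable {
      override def run(): Unit = {
        // Recompute the desired executor count from the *current* backlog,
        // regardless of whether it grew or shrank since the last tick.
        val target = math.max(1, (pendingTasks() + 1) / 2)
        println(s"desired executors on this tick: $target")
      }
    }
    timer.scheduleWithFixedDelay(tick, 0L, intervalMs, TimeUnit.MILLISECONDS)
  }
}
```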
And for the `Dynamic Allocation` & `User` case, yeah, that's hard to define.
Also, I checked `SchedulerBackendUtils.getInitialTargetExecutorNumber`; it sets `DEFAULT_NUMBER_EXECUTORS` to 2. But this is inconsistent with `Master`, which sets `executorLimit` to `Int.MaxValue` when we are not in dynamic allocation mode. Maybe we can just initialize `requestedTotalExecutors` with `Int.MaxValue` (only when we are not in dynamic allocation mode).
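Roughly like this sketch, where `initialTargetExecutors` is a made-up helper standing in for `SchedulerBackendUtils.getInitialTargetExecutorNumber`, just to show the seeding (names are illustrative, not an actual patch):

```scala
// Sketch of the first option: seed the requested total with Int.MaxValue when
// dynamic allocation is off, mirroring Master's executorLimit.
object RequestedTotalInitSketch {
  val DefaultNumberExecutors = 2  // the default mentioned above

  // Stand-in for SchedulerBackendUtils.getInitialTargetExecutorNumber.
  def initialTargetExecutors(dynamicAllocationEnabled: Boolean): Int =
    if (dynamicAllocationEnabled) DefaultNumberExecutors else Int.MaxValue

  def main(args: Array[String]): Unit = {
    println(initialTargetExecutors(dynamicAllocationEnabled = true))   // 2
    println(initialTargetExecutors(dynamicAllocationEnabled = false))  // 2147483647
  }
}
```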
Or, we could avoid calling `doRequestTotalExecutors` from `requestExecutors` and `killExecutors`, and only call it from `requestTotalExecutors` (again, only when we are not in dynamic allocation mode).
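Something like this simplified sketch (the bookkeeping and the `doRequestTotalExecutors` body are placeholders, not the real `CoarseGrainedSchedulerBackend` code): only `requestTotalExecutors` always syncs an explicit target, while `requestExecutors`/`killExecutors` skip the sync unless dynamic allocation is enabled.

```scala
// Simplified sketch of the second option; not the actual backend code.
class BackendSyncSketch(dynamicAllocationEnabled: Boolean) {
  private var requestedTotal: Int =
    if (dynamicAllocationEnabled) 2 else Int.MaxValue

  // Placeholder for the RPC that tells the cluster manager the new target.
  private def doRequestTotalExecutors(total: Int): Boolean = {
    println(s"syncing requested total = $total to the cluster manager")
    true
  }

  def requestTotalExecutors(total: Int): Boolean = {
    requestedTotal = total
    doRequestTotalExecutors(requestedTotal)   // always sync an explicit target
  }

  def requestExecutors(numAdditional: Int): Boolean =
    if (dynamicAllocationEnabled) {
      requestedTotal += numAdditional
      doRequestTotalExecutors(requestedTotal)
    } else {
      true                                    // leave the Int.MaxValue limit alone
    }

  def killExecutors(ids: Seq[String]): Boolean =
    if (dynamicAllocationEnabled) {
      requestedTotal = math.max(0, requestedTotal - ids.size)
      doRequestTotalExecutors(requestedTotal)
    } else {
      true                                    // kill without lowering the target
    }
}
```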
---