Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20604#discussion_r169421040
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1643,7 +1646,10 @@ class SparkContext(config: SparkConf) extends Logging {
   def killExecutors(executorIds: Seq[String]): Boolean = {
     schedulerBackend match {
       case b: ExecutorAllocationClient =>
-        b.killExecutors(executorIds, replace = false, force = true).nonEmpty
+        require(executorAllocationManager.isEmpty,
--- End diff ---
This is a developer API, so it's probably OK, but this is a change in behavior.
Is it just not possible to support this with dynamic allocation?
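
For context, here is a sketch of what the method would look like after this hunk. The diff is truncated above, so the `require` message and the fallback `case _` branch are my guesses, not text copied from the patch:

```scala
def killExecutors(executorIds: Seq[String]): Boolean = {
  schedulerBackend match {
    case b: ExecutorAllocationClient =>
      // New guard: fail fast when dynamic allocation is on, since a manual
      // kill would race with the ExecutorAllocationManager's own bookkeeping.
      // (executorAllocationManager is an Option, hence isEmpty.)
      require(executorAllocationManager.isEmpty,
        "killExecutors() unsupported with Dynamic Allocation turned on")
      b.killExecutors(executorIds, replace = false, force = true).nonEmpty
    case _ =>
      logWarning("Killing executors is not supported by current scheduler.")
      false
  }
}
```

If that reading is right, callers that previously invoked `sc.killExecutors(...)` with `spark.dynamicAllocation.enabled=true` would now get an `IllegalArgumentException` (from `require`) instead of having the executors killed, which is the behavior change in question.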