Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/20604#discussion_r169452326
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1643,7 +1646,10 @@ class SparkContext(config: SparkConf) extends Logging {
   def killExecutors(executorIds: Seq[String]): Boolean = {
     schedulerBackend match {
       case b: ExecutorAllocationClient =>
-        b.killExecutors(executorIds, replace = false, force = true).nonEmpty
+        require(executorAllocationManager.isEmpty,
--- End diff ---
I'm not sure why you'd call killExecutors with dynamic allocation enabled, but it
has been possible in the past. It's probably ok to change that behavior, though.
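
To make the precondition concrete, here's a minimal self-contained sketch of the
guard pattern the diff introduces: fail fast on a manual kill while dynamic
allocation is active. The object name, the `Option` stand-in for
`executorAllocationManager`, and the error message are illustrative assumptions
(the real `require` message is truncated in the diff above):

```scala
// Sketch only: approximates the guard added in the diff, not Spark's internals.
object KillExecutorsGuardSketch {

  // Stand-in for SparkContext.executorAllocationManager: defined (Some) when
  // dynamic allocation is enabled, None otherwise.
  var allocationManager: Option[String] = None

  def killExecutors(executorIds: Seq[String]): Boolean = {
    // The diff's new precondition: reject the call under dynamic allocation,
    // since the allocation manager would otherwise fight the manual kill.
    require(allocationManager.isEmpty,
      "killExecutors() is not supported with dynamic allocation enabled") // assumed wording
    executorIds.nonEmpty // stand-in for the real kill request
  }

  def main(args: Array[String]): Unit = {
    println(killExecutors(Seq("1", "2"))) // true: dynamic allocation is off

    allocationManager = Some("ExecutorAllocationManager")
    try killExecutors(Seq("3"))
    catch {
      // require throws IllegalArgumentException when the predicate is false
      case e: IllegalArgumentException => println(s"Rejected: ${e.getMessage}")
    }
  }
}
```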
---