Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/20604#discussion_r170383918
--- Diff: core/src/main/scala/org/apache/spark/ExecutorAllocationClient.scala ---
@@ -55,18 +55,18 @@ private[spark] trait ExecutorAllocationClient {
   /**
    * Request that the cluster manager kill the specified executors.
    *
-   * When asking the executor to be replaced, the executor loss is considered a failure, and
-   * killed tasks that are running on the executor will count towards the failure limits. If no
-   * replacement is being requested, then the tasks will not count towards the limit.
-   *
    * @param executorIds identifiers of executors to kill
-   * @param replace whether to replace the killed executors with new ones, default false
+   * @param adjustTargetNumExecutors whether the target number of executors will be adjusted down
+   *                                 after these executors have been killed
+   * @param countFailures if there are tasks running on the executors when they are killed, whether
--- End diff --
whoops, I was supposed to set `countFailures = true` in
`sc.killAndReplaceExecutors`, thanks for catching that.
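
For reference, a minimal sketch of what the corrected call site might look like, assuming `killExecutors` takes the parameters shown in the diff above. The `force` parameter and the wrapper method name here are assumptions for illustration, not the exact code in the PR:

```scala
import org.apache.spark.ExecutorAllocationClient

// Sketch: kill an executor that is expected to be replaced. Because a
// replacement is coming, the target executor count is left unchanged
// (adjustTargetNumExecutors = false), but any tasks killed along with
// the executor should still count toward the task failure limits,
// hence countFailures = true (the fix discussed in this thread).
def killAndReplace(client: ExecutorAllocationClient, executorId: String): Boolean = {
  client.killExecutors(
    Seq(executorId),
    adjustTargetNumExecutors = false, // replacement requested, keep the target
    countFailures = true,             // killed tasks count as failures
    force = true                      // assumption: bypass idle/busy checks
  ).nonEmpty                          // non-empty result means the kill was accepted
}
```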
---