vanzin commented on a change in pull request #25236: [SPARK-28487][k8s] More responsive dynamic allocation with K8S.
URL: https://github.com/apache/spark/pull/25236#discussion_r310732624
##########
File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
##########
@@ -134,14 +129,37 @@ private[spark] class KubernetesClusterSchedulerBackend(
     super.getExecutorIds()
   }
-  override def doKillExecutors(executorIds: Seq[String]): Future[Boolean] = Future[Boolean] {
-    kubernetesClient
-      .pods()
-      .withLabel(SPARK_APP_ID_LABEL, applicationId())
-      .withLabel(SPARK_ROLE_LABEL, SPARK_POD_EXECUTOR_ROLE)
-      .withLabelIn(SPARK_EXECUTOR_ID_LABEL, executorIds: _*)
-      .delete()
-    // Don't do anything else - let event handling from the Kubernetes API do the Spark changes
+  override def doKillExecutors(executorIds: Seq[String]): Future[Boolean] = {
+    executorIds.foreach { id =>
+      removeExecutor(id, ExecutorKilled)
+    }
+
+    // Give some time for the executors to shut themselves down, then forcefully kill any
+    // remaining ones. This intentionally ignores the configuration about whether pods
+    // should be deleted; only executors that shut down gracefully (and are then collected
+    // by the ExecutorPodsLifecycleManager) will respect that configuration.
+    val killTask = new Runnable() {
+      override def run(): Unit = Utils.tryLogNonFatalError {
+        val running = kubernetesClient
+          .pods()
+          .withField("status.phase", "Running")
+          .withLabel(SPARK_APP_ID_LABEL, applicationId())
+          .withLabel(SPARK_ROLE_LABEL, SPARK_POD_EXECUTOR_ROLE)
+          .withLabelIn(SPARK_EXECUTOR_ID_LABEL, executorIds: _*)
+
+        if (!running.list().getItems().isEmpty()) {
+          logInfo(s"Forcefully deleting ${running.list().getItems().size()} pods " +
+            s"(out of ${executorIds.size}) that are still running after graceful shutdown period.")
+          running.delete()
+        }
+      }
+    }
+    executorService.schedule(killTask, conf.get(KUBERNETES_DYN_ALLOC_KILL_GRACE_PERIOD),
+      TimeUnit.MILLISECONDS)
+
+    // Return an immediate success, since we can't confirm or deny that executors have been
+    // actually shut down without waiting too long and blocking the allocation thread.
+    Future.successful(true)
Review comment:
I added a longer comment explaining this.
The gist is:
- it's bad to wait, because it blocks the EAM (ExecutorAllocationManager) thread, in this case for a really long time
- it's OK to return "true", because these executors will all die eventually, whether because of the shutdown message or because of the explicit kill.

The return value, to the best of my understanding, is not meant to say "yes, all executors have been killed", but rather "an attempt has been made to remove all of these executors, and they'll die eventually". (Otherwise there would be no need for the EAM to track which executors are pending removal, since it would know immediately from this return value.)