squito commented on a change in pull request #25236: [SPARK-28487][k8s] More responsive dynamic allocation with K8S.
URL: https://github.com/apache/spark/pull/25236#discussion_r309745867
 
 

 ##########
 File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
 ##########
 @@ -134,14 +129,37 @@ private[spark] class KubernetesClusterSchedulerBackend(
     super.getExecutorIds()
   }
 
-  override def doKillExecutors(executorIds: Seq[String]): Future[Boolean] = Future[Boolean] {
-    kubernetesClient
-      .pods()
-      .withLabel(SPARK_APP_ID_LABEL, applicationId())
-      .withLabel(SPARK_ROLE_LABEL, SPARK_POD_EXECUTOR_ROLE)
-      .withLabelIn(SPARK_EXECUTOR_ID_LABEL, executorIds: _*)
-      .delete()
-    // Don't do anything else - let event handling from the Kubernetes API do the Spark changes
+  override def doKillExecutors(executorIds: Seq[String]): Future[Boolean] = {
+    executorIds.foreach { id =>
+      removeExecutor(id, ExecutorKilled)
+    }
+
+    // Give some time for the executors to shut themselves down, then forcefully kill any
+    // remaining ones. This intentionally ignores the configuration about whether pods
+    // should be deleted; only executors that shut down gracefully (and are then collected
+    // by the ExecutorPodsLifecycleManager) will respect that configuration.
+    val killTask = new Runnable() {
+      override def run(): Unit = Utils.tryLogNonFatalError {
+        val running = kubernetesClient
+          .pods()
+          .withField("status.phase", "Running")
+          .withLabel(SPARK_APP_ID_LABEL, applicationId())
+          .withLabel(SPARK_ROLE_LABEL, SPARK_POD_EXECUTOR_ROLE)
+          .withLabelIn(SPARK_EXECUTOR_ID_LABEL, executorIds: _*)
+
+        if (!running.list().getItems().isEmpty()) {
+          logInfo(s"Forcefully deleting ${running.list().getItems().size()} pods " +
+            s"(out of ${executorIds.size}) that are still running after graceful shutdown period.")
+          running.delete()
+        }
+      }
+    }
+    executorService.schedule(killTask, conf.get(KUBERNETES_DYN_ALLOC_KILL_GRACE_PERIOD),
+      TimeUnit.MILLISECONDS)
+
+    // Return an immediate success, since we can't confirm or deny that executors have been
+    // actually shut down without waiting too long and blocking the allocation thread.
+    Future.successful(true)
 
 Review comment:
   this seems bad.  If we get the response wrong, then the ExecutorAllocationManager will mistakenly update its internal state to think the executors have been removed, when they haven't been:
   
   
https://github.com/apache/spark/blob/b29829e2abdebdf6fa9dd0a4a4cf4c9d676ee82d/core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala#L448-L455
   
   which means we're expecting that call to kubernetes to delete the pods to be foolproof.
   
   Why is it so bad to wait here?  Is it because we are holding locks when making this call in CoarseGrainedSchedulerBackend?  Could that be avoided?
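   To illustrate the alternative being suggested (a sketch only, not Spark's actual code): the backend could complete the returned `Future` from the scheduled kill task instead of returning `Future.successful(true)`, so the caller only updates its state once the forceful delete has actually run. All names below are hypothetical, and the Kubernetes client query is replaced by a placeholder; only plain `java.util.concurrent` and `scala.concurrent` are used:

```scala
import java.util.concurrent.{Executors, TimeUnit}

import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

// Hypothetical sketch of doKillExecutors: the returned Future is completed
// by the scheduled kill task rather than immediately, so the result reflects
// whether the kill attempt actually finished.
object KillFutureSketch {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  def doKillExecutors(executorIds: Seq[String], gracePeriodMs: Long): Future[Boolean] = {
    val result = Promise[Boolean]()
    val killTask = new Runnable {
      override def run(): Unit = {
        // In the real backend this is where any still-running pods would be
        // force-deleted; here we just pretend none were left running.
        val stillRunning = 0 // placeholder for the kubernetesClient query
        result.trySuccess(stillRunning == 0)
      }
    }
    // Complete the promise only after the grace period has elapsed.
    scheduler.schedule(killTask, gracePeriodMs, TimeUnit.MILLISECONDS)
    result.future
  }

  def main(args: Array[String]): Unit = {
    val done = doKillExecutors(Seq("1", "2"), gracePeriodMs = 100)
    // The caller can now wait (or chain a callback) on the real outcome.
    println(Await.result(done, 5.seconds))
    scheduler.shutdown()
  }
}
```

   The allocation thread need not block here: the caller can attach a callback to the returned Future instead of awaiting it, which keeps the reported result truthful without holding locks across the Kubernetes call.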
