Github user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21366#discussion_r190958844
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala ---
    @@ -56,17 +56,43 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
           Some(new File(Config.KUBERNETES_SERVICE_ACCOUNT_TOKEN_PATH)),
           Some(new File(Config.KUBERNETES_SERVICE_ACCOUNT_CA_CRT_PATH)))
     
    -    val allocatorExecutor = ThreadUtils
    -      .newDaemonSingleThreadScheduledExecutor("kubernetes-pod-allocator")
         val requestExecutorsService = ThreadUtils.newDaemonCachedThreadPool(
           "kubernetes-executor-requests")
    +
    +    val bufferEventsExecutor = ThreadUtils
    +      .newDaemonSingleThreadScheduledExecutor("kubernetes-executor-pods-event-buffer")
    +    val executeEventSubscribersExecutor = ThreadUtils
    +      .newDaemonCachedThreadPool("kubernetes-executor-pods-event-handlers")
    +    val eventQueue = new ExecutorPodsEventQueueImpl(
    +      bufferEventsExecutor, executeEventSubscribersExecutor)
    --- End diff --
    
    Given that we now have several executor services to shut down, can we shut them
    all down in as few places as possible, e.g., in
    `KubernetesClusterSchedulerBackend.stop`? For example, you can shut down
    `requestExecutorsService` and `eventsPollingExecutor` in
    `KubernetesClusterSchedulerBackend.stop`. Shutting down `bufferEventsExecutor`
    and `executeEventSubscribersExecutor` probably needs to be handled by
    `ExecutorPodsEventQueueImpl` in its `stopProcessingEvents` method. Ideally each
    executor service would be shut down in the same place where it is created, but
    that isn't doable here because `KubernetesClusterManager` doesn't have a stop
    method.
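    
    Something like the following is what I have in mind. This is only a rough
    sketch: the constructor shape and the shutdown/await details are hypothetical;
    only the executor names and the `stopProcessingEvents` method come from this PR.
    
    ```scala
    import java.util.concurrent.{ExecutorService, ScheduledExecutorService, TimeUnit}
    
    // Rough sketch only: the event queue owns the two executors it is created
    // with, so it is responsible for shutting them down when processing stops.
    class ExecutorPodsEventQueueImpl(
        bufferEventsExecutor: ScheduledExecutorService,
        executeEventSubscribersExecutor: ExecutorService) {
    
      def stopProcessingEvents(): Unit = {
        // Stop buffering new events first, then stop the subscriber handlers.
        bufferEventsExecutor.shutdown()
        executeEventSubscribersExecutor.shutdown()
        // Give in-flight subscriber callbacks a bounded time to finish.
        executeEventSubscribersExecutor.awaitTermination(10, TimeUnit.SECONDS)
      }
    }
    ```
    
    `KubernetesClusterSchedulerBackend.stop` would then only need to call
    `stopProcessingEvents()` and shut down `requestExecutorsService` itself.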


---
