Github user mccheah commented on a diff in the pull request:
https://github.com/apache/spark/pull/21366#discussion_r190964855
--- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterManager.scala ---
@@ -56,17 +56,43 @@ private[spark] class KubernetesClusterManager extends ExternalClusterManager wit
Some(new File(Config.KUBERNETES_SERVICE_ACCOUNT_TOKEN_PATH)),
Some(new File(Config.KUBERNETES_SERVICE_ACCOUNT_CA_CRT_PATH)))
- val allocatorExecutor = ThreadUtils
- .newDaemonSingleThreadScheduledExecutor("kubernetes-pod-allocator")
val requestExecutorsService = ThreadUtils.newDaemonCachedThreadPool(
"kubernetes-executor-requests")
+
+    val bufferEventsExecutor = ThreadUtils
+      .newDaemonSingleThreadScheduledExecutor("kubernetes-executor-pods-event-buffer")
+ val executeEventSubscribersExecutor = ThreadUtils
+ .newDaemonCachedThreadPool("kubernetes-executor-pods-event-handlers")
+ val eventQueue = new ExecutorPodsEventQueueImpl(
+ bufferEventsExecutor, executeEventSubscribersExecutor)
--- End diff ---
One way to share this logic would be to have a shared trait for all the things that have thread pools to stop, and then use the trait to shut everything down, e.g.:
```
import java.util.concurrent.ExecutorService

trait HasThreadPools {
  protected def threadPools(): Iterable[ExecutorService]
  def stop(): Unit = threadPools().foreach(_.shutdownNow())
}

class MyThreadPoolsDependentClass extends HasThreadPools {
  override protected def threadPools(): Iterable[ExecutorService] = {
    // All the executors I want to shut down
    Seq.empty
  }

  override def stop(): Unit = {
    // Stop everything else
    super.stop()
  }
}
```
Not sure if it's worth the extra abstraction to do this though.
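For concreteness, a hypothetical sketch (not part of this PR) of how the pools created in the diff above could be surfaced through such a trait. The class name and constructor wiring are illustrative assumptions; only the executor names come from the diff:
```
import java.util.concurrent.ExecutorService

// Hypothetical wiring only: the parameter names mirror the executors created
// in the diff, and the trait's stop() shuts all of them down in one call.
class ExecutorPodsEventHandling(
    bufferEventsExecutor: ExecutorService,
    executeEventSubscribersExecutor: ExecutorService,
    requestExecutorsService: ExecutorService) extends HasThreadPools {

  override protected def threadPools(): Iterable[ExecutorService] =
    Seq(bufferEventsExecutor, executeEventSubscribersExecutor, requestExecutorsService)
}
```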
---