Github user foxish commented on a diff in the pull request:
https://github.com/apache/spark/pull/21366#discussion_r191865387
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -154,6 +154,24 @@ private[spark] object Config extends Logging {
      .checkValue(interval => interval > 0, s"Logging interval must be a positive time value.")
      .createWithDefaultString("1s")
 
+  val KUBERNETES_EXECUTOR_API_POLLING_INTERVAL =
+    ConfigBuilder("spark.kubernetes.executor.apiPollingInterval")
+      .doc("Interval between polls against the Kubernetes API server to inspect the " +
+        "state of executors.")
+      .timeConf(TimeUnit.MILLISECONDS)
+      .checkValue(interval => interval > 0, s"API server polling interval must be a" +
+        " positive time value.")
+      .createWithDefaultString("30s")
+
+  val KUBERNETES_EXECUTOR_EVENT_PROCESSING_INTERVAL =
--- End diff --
I think this option is hard to reason about and relies on understanding an
implementation detail (the event queue). Why not just pick a sensible default and
leave it at that? In what scenario would a user need to tune this value?
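(For context, the option under discussion would be set like any other Spark
configuration property at submit time. A hypothetical invocation is sketched
below; the API server address and application jar path are placeholders, and
only the `spark.kubernetes.executor.apiPollingInterval` key comes from the diff
above.)

```shell
# Illustrative only: the k8s master URL and jar path are placeholders.
spark-submit \
  --master k8s://https://example-k8s-apiserver:6443 \
  --deploy-mode cluster \
  --conf spark.kubernetes.executor.apiPollingInterval=30s \
  local:///opt/spark/examples/jars/spark-examples.jar
```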
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]