[GitHub] [spark] yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r332818335

File path: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala

@@ -177,13 +176,13 @@ class ClientSuite extends SparkFunSuite with BeforeAndAfter {
   }

   test("Waiting for app completion should stall on the watcher") {
+    kconf.sparkConf.set(WAIT_FOR_APP_COMPLETION, true)

Review comment:
I see, thanks.

---
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services

---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
[GitHub] [spark] yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r329290356

File path: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/KubernetesTestConf.scala

@@ -35,7 +35,7 @@ object KubernetesTestConf {
   val RESOURCE_PREFIX = "prefix"
   val EXECUTOR_ID = "1"

-  private val DEFAULT_CONF = new SparkConf(false)
+  private val DEFAULT_CONF = new SparkConf(false).set(WAIT_FOR_APP_COMPLETION, false)

Review comment:
https://github.com/apache/spark/blob/7912ab85a6fc086b814d93e7c71af1f50515517a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala#L205

Since `spark.kubernetes.report.interval` is always positive, I guess that if WAIT_FOR_APP_COMPLETION is true, the logging is inevitably on, whether at a long or a short interval. I don't think this is dangerous, because this only controls the logic of LoggingPodStatusWatcher, both before and after this change. For unit tests, this default will only take effect in `k8s.submit.ClientSuite`; if it feels a bit odd to you, I can instead set it test by test in that test class.

Also cc @erikerlandson
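The reasoning above (the report interval is always positive, so the flag alone decides whether periodic logging runs) can be sketched as a tiny self-contained check; the names here only mirror the real Spark config entries and are illustrative, not the actual API:

```scala
// Illustrative sketch only: `waitForAppCompletion` and `reportIntervalMs` stand in for
// the real spark.kubernetes.submission.waitAppCompletion / spark.kubernetes.report.interval
// settings; they are not read from a SparkConf here.
object WatcherConfigSketch {
  def periodicLoggingEnabled(waitForAppCompletion: Boolean, reportIntervalMs: Long): Boolean = {
    // The interval is validated to be positive, so it can never switch logging off by itself.
    require(reportIntervalMs > 0, "spark.kubernetes.report.interval must be positive")
    // The flag decides whether periodic status logging runs at all;
    // the interval only controls how often it fires.
    waitForAppCompletion
  }
}
```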
[GitHub] [spark] yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r329288629

File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/LoggingPodStatusWatcher.scala

@@ -22,43 +22,41 @@
 import io.fabric8.kubernetes.api.model.Pod
 import io.fabric8.kubernetes.client.{KubernetesClientException, Watcher}
 import io.fabric8.kubernetes.client.Watcher.Action

+import org.apache.spark.deploy.k8s.Config._
+import org.apache.spark.deploy.k8s.KubernetesDriverConf
 import org.apache.spark.deploy.k8s.KubernetesUtils._
 import org.apache.spark.internal.Logging
 import org.apache.spark.util.ThreadUtils

 private[k8s] trait LoggingPodStatusWatcher extends Watcher[Pod] {
-  def awaitCompletion(): Unit
+  def watchOrStop(submissionId: String): Unit
 }

 /**
  * A monitor for the running Kubernetes pod of a Spark application. Status logging occurs on
  * every state change and also at an interval for liveness.
  *
- * @param appId application ID.
- * @param maybeLoggingInterval ms between each state request. If provided, must be a positive
- *                             number.
+ * @param conf kubernetes driver conf.
  */
-private[k8s] class LoggingPodStatusWatcherImpl(
-    appId: String,
-    maybeLoggingInterval: Option[Long])
+private[k8s] class LoggingPodStatusWatcherImpl(conf: KubernetesDriverConf)
   extends LoggingPodStatusWatcher with Logging {

+  private val appId = conf.appId
+
   private val podCompletedFuture = new CountDownLatch(1)

   // start timer for periodic logging
-  private val scheduler =
-    ThreadUtils.newDaemonSingleThreadScheduledExecutor("logging-pod-status-watcher")
-  private val logRunnable: Runnable = () => logShortStatus()
+  private lazy val maybeLoggingService = if (conf.get(WAIT_FOR_APP_COMPLETION)) {

Review comment:
yes, there is none.
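The lazy-scheduler pattern in the diff above — only create the status-logging thread when the client is going to wait for completion — can be illustrated with a self-contained sketch. This is a hypothetical, trimmed-down stand-in for LoggingPodStatusWatcherImpl: it uses a plain Executors factory instead of Spark's ThreadUtils, a Boolean flag instead of a KubernetesDriverConf, and a short timeout where the real code waits indefinitely:

```scala
import java.util.concurrent.{CountDownLatch, Executors, ScheduledExecutorService, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

// Hypothetical sketch of the lazy logging-service idea; not the real Spark class.
class LazyWatcherSketch(waitForAppCompletion: Boolean, intervalMs: Long) {
  private val podCompleted = new CountDownLatch(1)
  val statusLogCount = new AtomicInteger(0)

  private def logShortStatus(): Unit = statusLogCount.incrementAndGet()

  // `lazy` defers thread creation until first access, and the Option means no
  // scheduler thread is ever started when the client will not wait for completion.
  private lazy val maybeLoggingService: Option[ScheduledExecutorService] =
    if (waitForAppCompletion) {
      val s = Executors.newSingleThreadScheduledExecutor()
      s.scheduleAtFixedRate(() => logShortStatus(), 0L, intervalMs, TimeUnit.MILLISECONDS)
      Some(s)
    } else {
      None
    }

  def watchOrStop(): Unit = {
    // Force the lazy val so periodic logging starts (if enabled at all).
    val service = maybeLoggingService
    if (waitForAppCompletion) {
      // The real watcher awaits pod completion indefinitely; bounded here for the sketch.
      podCompleted.await(200, TimeUnit.MILLISECONDS)
      service.foreach(_.shutdownNow())
    }
    // When not waiting, return immediately without ever creating a scheduler.
  }
}
```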
[GitHub] [spark] yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r329288272

File path: resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/submit/ClientSuite.scala

@@ -177,13 +176,13 @@ class ClientSuite extends SparkFunSuite with BeforeAndAfter {
   }

   test("Waiting for app completion should stall on the watcher") {
+    kconf.sparkConf.set(WAIT_FOR_APP_COMPLETION, true)

Review comment:
I guess this is needed because the watcher won't stall when the flag is false.
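The point above — a "should stall" test only makes sense with the flag on, because the watcher returns immediately otherwise — can be captured in a hypothetical miniature (`StallSketch` and its `watchOrStop` signature are invented for illustration, not the real test code):

```scala
import java.util.concurrent.{CountDownLatch, TimeUnit}

// Hypothetical miniature of the test's reasoning; not the real ClientSuite code.
class StallSketch(waitForAppCompletion: Boolean) {
  private val completed = new CountDownLatch(1)

  // Returns true if the call actually stalled (timed out waiting for completion),
  // false if it returned right away because there was nothing to wait for.
  def watchOrStop(timeoutMs: Long): Boolean =
    if (waitForAppCompletion) !completed.await(timeoutMs, TimeUnit.MILLISECONDS)
    else false
}
```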
[GitHub] [spark] yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
yaooqinn commented on a change in pull request #25648: [SPARK-28947][K8S] Status logging not happens at an interval for liveness
URL: https://github.com/apache/spark/pull/25648#discussion_r329288207

File path: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala

@@ -86,15 +86,11 @@
  * @param builder Responsible for building the base driver pod based on a composition of
  *                implemented features.
  * @param kubernetesClient the client to talk to the Kubernetes API server
- * @param waitForAppCompletion a flag indicating whether the client should wait for the application
- *                             to complete
- * @param watcher a watcher that monitors and logs the application status

Review comment:
yes, I will add it back.