Github user liyinan926 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19468#discussion_r153359582
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala ---
    @@ -111,5 +111,14 @@ private[spark] object Config extends Logging {
           .stringConf
           .createOptional
     
    +  val KUBERNETES_EXECUTOR_LOST_REASON_CHECK_MAX_ATTEMPTS =
    +    ConfigBuilder("spark.kubernetes.executor.lostCheck.maxAttempts")
    +      .doc("Maximum number of attempts allowed for checking the reason of 
an executor loss " +
    +        "before it is assumed that the executor failed.")
    +      .intConf
    +      .checkValue(value => value > 0, "Maximum attempts of checks of executor lost reason " +
    +        "must be a positive integer")
    +      .createWithDefault(5)
    --- End diff ---
    
    Yes, but I think 5 is a more sensible default than 10. @mccheah @foxish WDYT?
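    
    For reference, a minimal sketch (not part of this diff) of how a user could override whatever default we land on from application code; the config key is taken from the diff above, and the value 10 is purely illustrative:
    
        // Sketch: overriding spark.kubernetes.executor.lostCheck.maxAttempts
        // via SparkConf before the application starts. The value 10 is illustrative.
        import org.apache.spark.SparkConf
    
        val conf = new SparkConf()
          .setAppName("executor-lost-check-example")
          .set("spark.kubernetes.executor.lostCheck.maxAttempts", "10")
    
    The same setting could equally be passed with --conf on spark-submit, so the default mainly matters for users who never think about this knob.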

