Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21150#discussion_r227028091
--- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala ---
@@ -728,6 +729,28 @@ private[spark] class MesosClusterScheduler(
state == MesosTaskState.TASK_LOST
}
+ /**
+ * Check if the driver has exceeded the number of retries.
+ * When "spark.mesos.driver.supervise.maxRetries" is not set,
+ * the default behavior is to retry indefinitely.
+ *
+ * @param retryState Retry state of the driver
+ * @param conf Spark conf to check whether it contains "spark.mesos.driver.supervise.maxRetries"
+ * @return true if the driver has reached the retry limit,
+ *         false if the driver can be retried
+ */
+ private[scheduler] def hasDriverExceededRetries(retryState: Option[MesosClusterRetryState],
--- End diff --
Please fix the param style:
hasDriverExceededRetries(
retryState: Option[MesosClusterRetryState],
conf.....)
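For reference, a self-contained sketch of what the signature could look like under Spark's multi-line parameter convention (each parameter on its own 4-space-indented continuation line). The stub types, the `SparkConf` parameter, and the method body are assumptions for illustration only; the actual diff truncates them:

```scala
// Minimal stand-ins for the Spark types referenced in the diff (assumptions,
// not the real Spark classes).
case class MesosClusterRetryState(retries: Int)
class SparkConf(settings: Map[String, String]) {
  def getOption(key: String): Option[String] = settings.get(key)
}

object RetryCheck {
  private val MaxRetriesKey = "spark.mesos.driver.supervise.maxRetries"

  // The reviewer's requested style: break after the opening paren and put
  // each parameter on its own 4-space-indented line.
  def hasDriverExceededRetries(
      retryState: Option[MesosClusterRetryState],
      conf: SparkConf): Boolean = {
    // When the key is unset, retry indefinitely, i.e. never "exceeded".
    conf.getOption(MaxRetriesKey).exists { max =>
      retryState.exists(_.retries >= max.toInt)
    }
  }
}
```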
---