This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
     new 4ded0885968 [SPARK-44158][K8S] Remove unused `spark.kubernetes.executor.lostCheck.maxAttempts`
4ded0885968 is described below

commit 4ded08859681b3bcef353e1fd8068712734144b5
Author: Dongjoon Hyun <[email protected]>
AuthorDate: Fri Jun 23 13:49:24 2023 -0700

    [SPARK-44158][K8S] Remove unused `spark.kubernetes.executor.lostCheck.maxAttempts`
    
    ### What changes were proposed in this pull request?
    
    This PR aims to remove `spark.kubernetes.executor.lostCheck.maxAttempts` because it has not been used since SPARK-24248 (Apache Spark 2.4.0).
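
    For context, entries declared with Spark's internal `ConfigBuilder` API only take effect where some component reads them, so an entry with no readers is dead code. A minimal sketch of the declaration pattern, using a hypothetical entry name:

        import org.apache.spark.internal.config.ConfigBuilder

        // Hypothetical entry, mirroring the pattern used in Config.scala.
        // Declaring an entry does nothing by itself; it only takes effect
        // where code calls conf.get(ENTRY), and the removed entry had no
        // such caller left.
        val EXAMPLE_MAX_ATTEMPTS =
          ConfigBuilder("spark.kubernetes.example.maxAttempts")
            .doc("Illustrative only; not a real Spark configuration.")
            .version("3.4.0")
            .intConf
            .checkValue(_ > 0, "Must be a positive integer")
            .createWithDefault(10)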
    
    ### Why are the changes needed?
    
    To remove this dead configuration from the documentation and code.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No, because the configuration was already a no-op.
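
    Because `SparkConf` accepts arbitrary `spark.*` keys, jobs that still set the removed key keep working; the value is simply never read. A minimal sketch, assuming only the public `SparkConf` API:

        import org.apache.spark.SparkConf

        // Setting the removed key is still accepted, but it is a no-op:
        // no code path reads the value anymore.
        val conf = new SparkConf()
          .set("spark.kubernetes.executor.lostCheck.maxAttempts", "10")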
    
    ### How was this patch tested?
    
    Pass the CIs.
    
    Closes #41713 from dongjoon-hyun/SPARK-44158.
    
    Authored-by: Dongjoon Hyun <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
    (cherry picked from commit 6590e7db5212bb0dc90f22133a96e3d5e385af65)
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 docs/running-on-kubernetes.md                                  | 10 ----------
 .../src/main/scala/org/apache/spark/deploy/k8s/Config.scala    | 10 ----------
 2 files changed, 20 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 98c868e4c37..71754dbc6f9 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -961,16 +961,6 @@ See the [configuration page](configuration.html) for information on Spark config
   </td>
   <td>2.3.0</td>
 </tr>
-<tr>
-  <td><code>spark.kubernetes.executor.lostCheck.maxAttempts</code></td>
-  <td><code>10</code></td>
-  <td>
-    Number of times that the driver will try to ascertain the loss reason for a specific executor.
-    The loss reason is used to ascertain whether the executor failure is due to a framework or an application error
-    which in turn decides whether the executor is removed and replaced, or placed into a failed state for debugging.
-  </td>
-  <td>2.3.0</td>
-</tr>
 <tr>
   <td><code>spark.kubernetes.submission.waitAppCompletion</code></td>
   <td><code>true</code></td>
diff --git a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
index 042e9682730..0c54191fb10 100644
--- a/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
+++ b/resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
@@ -471,16 +471,6 @@ private[spark] object Config extends Logging {
       .checkValue(value => value > 0, "Allocation executor timeout must be a 
positive time value.")
       .createWithDefaultString("600s")
 
-  val KUBERNETES_EXECUTOR_LOST_REASON_CHECK_MAX_ATTEMPTS =
-    ConfigBuilder("spark.kubernetes.executor.lostCheck.maxAttempts")
-      .doc("Maximum number of attempts allowed for checking the reason of an 
executor loss " +
-        "before it is assumed that the executor failed.")
-      .version("2.3.0")
-      .intConf
-      .checkValue(value => value > 0, "Maximum attempts of checks of executor lost reason " +
-        "must be a positive integer")
-      .createWithDefault(10)
-
   val WAIT_FOR_APP_COMPLETION =
     ConfigBuilder("spark.kubernetes.submission.waitAppCompletion")
       .doc("In cluster mode, whether to wait for the application to finish 
before exiting the " +
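
For reference, a minimal sketch of the read path that surviving entries in Config.scala have and the removed entry lacked; this compiles only inside the Spark codebase, since `Config` and `SparkConf.get(ConfigEntry)` are `private[spark]`:

    import org.apache.spark.SparkConf
    import org.apache.spark.deploy.k8s.Config

    // A config entry changes behavior only at a call site like this one.
    // KUBERNETES_EXECUTOR_LOST_REASON_CHECK_MAX_ATTEMPTS had no remaining
    // call site after SPARK-24248, so deleting it preserves behavior.
    val conf = new SparkConf()
    val waitForCompletion: Boolean = conf.get(Config.WAIT_FOR_APP_COMPLETION)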


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
