This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 194aa18821c0 [SPARK-49868][DOC][FOLLOWUP] Update docs for executor failure tracking configurations
194aa18821c0 is described below

commit 194aa18821c04f068864cc4cf9e3124c54ae7c44
Author: Cheng Pan <[email protected]>
AuthorDate: Tue Jan 7 07:02:10 2025 -0800

    [SPARK-49868][DOC][FOLLOWUP] Update docs for executor failure tracking configurations
    
    ### What changes were proposed in this pull request?
    
    Previously, the executor failure tracking code lived in
    `ExecutorPodsAllocator`, which only takes effect when
    `spark.kubernetes.allocation.pods.allocator=direct`.
    https://github.com/apache/spark/pull/48344 moved the code to
    `ExecutorPodsLifecycleManager` and consequently removed this limitation.
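    With the limitation removed, these two configurations now take effect on
    Kubernetes regardless of the pods allocator. A minimal sketch of setting
    them at submit time (the master URL and the specific values are
    illustrative, not recommendations):

    ```shell
    # Illustrative values: fail the application after 10 executor failures,
    # and stop counting failures older than one hour.
    spark-submit \
      --master k8s://https://k8s-apiserver:6443 \
      --conf spark.executor.maxNumFailures=10 \
      --conf spark.executor.failuresValidityInterval=1h \
      --class org.example.MyApp my-app.jar
    ```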
    
    ### Why are the changes needed?
    
    Keep docs up-to-date with code.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, docs are updated.
    
    ### How was this patch tested?
    
    Review.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No.
    
    Closes #48358 from pan3793/SPARK-49868-followup.
    
    Authored-by: Cheng Pan <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 core/src/main/scala/org/apache/spark/internal/config/package.scala | 7 +++----
 docs/configuration.md                                              | 6 ++----
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index 324ef701c426..6d51424f0baf 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -1023,8 +1023,7 @@ package object config {
   private[spark] val MAX_EXECUTOR_FAILURES =
     ConfigBuilder("spark.executor.maxNumFailures")
       .doc("The maximum number of executor failures before failing the application. " +
-        "This configuration only takes effect on YARN, or Kubernetes when " +
-        "`spark.kubernetes.allocation.pods.allocator` is set to 'direct'.")
+        "This configuration only takes effect on YARN and Kubernetes.")
       .version("3.5.0")
       .intConf
       .createOptional
@@ -1032,8 +1031,8 @@ package object config {
   private[spark] val EXECUTOR_ATTEMPT_FAILURE_VALIDITY_INTERVAL_MS =
     ConfigBuilder("spark.executor.failuresValidityInterval")
       .doc("Interval after which executor failures will be considered independent and not " +
-        "accumulate towards the attempt count. This configuration only takes effect on YARN, " +
-        "or Kubernetes when `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.")
+        "accumulate towards the attempt count. This configuration only takes effect on YARN " +
+        "and Kubernetes.")
       .version("3.5.0")
       .timeConf(TimeUnit.MILLISECONDS)
       .createOptional
diff --git a/docs/configuration.md b/docs/configuration.md
index 6957ca9a03d2..4a85c4f256a9 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -565,8 +565,7 @@ of the most common options to set are:
   <td>numExecutors * 2, with minimum of 3</td>
   <td>
     The maximum number of executor failures before failing the application.
-    This configuration only takes effect on YARN, or Kubernetes when 
-    <code>spark.kubernetes.allocation.pods.allocator</code> is set to 'direct'.
+    This configuration only takes effect on YARN and Kubernetes.
   </td>
   <td>3.5.0</td>
 </tr>
@@ -576,8 +575,7 @@ of the most common options to set are:
   <td>
     Interval after which executor failures will be considered independent and
     not accumulate towards the attempt count.
-    This configuration only takes effect on YARN, or Kubernetes when 
-    <code>spark.kubernetes.allocation.pods.allocator</code> is set to 'direct'.
+    This configuration only takes effect on YARN and Kubernetes.
   </td>
   <td>3.5.0</td>
 </tr>

