This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 7a43de193aa [SPARK-45969][DOCS] Document configuration change of executor failure tracker
7a43de193aa is described below

commit 7a43de193aa5a0856e098088728dccea37f169c5
Author: Cheng Pan <cheng...@apache.org>
AuthorDate: Sun Dec 10 14:03:37 2023 -0800

    [SPARK-45969][DOCS] Document configuration change of executor failure tracker
    
    ### What changes were proposed in this pull request?
    
    This is a follow-up of SPARK-41210 (filed under a new JIRA ticket because that change was released in 3.5.0). This PR updates the docs and migration guide for the configuration change of the executor failure tracker.
    
    ### Why are the changes needed?
    
    The docs update was missing from the previous changes; it was also requested at https://github.com/apache/spark/commit/40872e9a094f8459b0b6f626937ced48a8d98efb#r132516892 by tgravescs.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, docs changed
    
    ### How was this patch tested?
    
    Review
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #43863 from pan3793/SPARK-45969.
    
    Authored-by: Cheng Pan <cheng...@apache.org>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../org/apache/spark/internal/config/package.scala  |  4 ++--
 docs/configuration.md                               | 21 +++++++++++++++++++++
 docs/core-migration-guide.md                        |  6 ++++++
 docs/running-on-yarn.md                             | 17 -----------------
 4 files changed, 29 insertions(+), 19 deletions(-)

diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index 2c710e6025d..2823b7cdb60 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -931,7 +931,7 @@ package object config {
 
   private[spark] val MAX_EXECUTOR_FAILURES =
     ConfigBuilder("spark.executor.maxNumFailures")
-      .doc("Spark exits if the number of failed executors exceeds this threshold. " +
+      .doc("The maximum number of executor failures before failing the application. " +
         "This configuration only takes effect on YARN, or Kubernetes when " +
         "`spark.kubernetes.allocation.pods.allocator` is set to 'direct'.")
       .version("3.5.0")
@@ -940,7 +940,7 @@ package object config {
 
   private[spark] val EXECUTOR_ATTEMPT_FAILURE_VALIDITY_INTERVAL_MS =
     ConfigBuilder("spark.executor.failuresValidityInterval")
-      .doc("Interval after which Executor failures will be considered independent and not " +
+      .doc("Interval after which executor failures will be considered independent and not " +
         "accumulate towards the attempt count. This configuration only takes effect on YARN, " +
         "or Kubernetes when `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.")
       .version("3.5.0")
diff --git a/docs/configuration.md b/docs/configuration.md
index f261e3b2deb..b45d647fde8 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -522,6 +522,27 @@ of the most common options to set are:
   </td>
   <td>3.2.0</td>
 </tr>
+<tr>
+  <td><code>spark.executor.maxNumFailures</code></td>
+  <td>numExecutors * 2, with minimum of 3</td>
+  <td>
+    The maximum number of executor failures before failing the application.
+    This configuration only takes effect on YARN, or Kubernetes when 
+    `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.
+  </td>
+  <td>3.5.0</td>
+</tr>
+<tr>
+  <td><code>spark.executor.failuresValidityInterval</code></td>
+  <td>(none)</td>
+  <td>
+    Interval after which executor failures will be considered independent and
+    not accumulate towards the attempt count.
+    This configuration only takes effect on YARN, or Kubernetes when 
+    `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.
+  </td>
+  <td>3.5.0</td>
+</tr>
 </table>
 
 Apart from these, the following properties are also available, and may be useful in some situations:
diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 09ba4b474e9..179b0b3fae1 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -32,6 +32,12 @@ license: |
 
 - In Spark 4.0, support for Apache Mesos as a resource manager was removed.
 
+## Upgrading from Core 3.4 to 3.5
+
+- Since Spark 3.5, `spark.yarn.executor.failuresValidityInterval` is deprecated. Use `spark.executor.failuresValidityInterval` instead.
+
+- Since Spark 3.5, `spark.yarn.max.executor.failures` is deprecated. Use `spark.executor.maxNumFailures` instead.
+
 ## Upgrading from Core 3.3 to 3.4
 
 - Since Spark 3.4, Spark driver will own `PersistentVolumeClaim`s and try to reuse if they are not assigned to live executors. To restore the behavior before Spark 3.4, you can set `spark.kubernetes.driver.ownPersistentVolumeClaim` to `false` and `spark.kubernetes.driver.reusePersistentVolumeClaim` to `false`.
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index 52afb178a51..3dfa63e1cb2 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -291,14 +291,6 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>1.4.0</td>
 </tr>
-<tr>
-  <td><code>spark.yarn.max.executor.failures</code></td>
-  <td>numExecutors * 2, with minimum of 3</td>
-  <td>
-    The maximum number of executor failures before failing the application.
-  </td>
-  <td>1.0.0</td>
-</tr>
 <tr>
   <td><code>spark.yarn.historyServer.address</code></td>
   <td>(none)</td>
@@ -499,15 +491,6 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>3.3.0</td>
 </tr>
-<tr>
-  <td><code>spark.yarn.executor.failuresValidityInterval</code></td>
-  <td>(none)</td>
-  <td>
-  Defines the validity interval for executor failure tracking.
-  Executor failures which are older than the validity interval will be ignored.
-  </td>
-  <td>2.0.0</td>
-</tr>
 <tr>
   <td><code>spark.yarn.submit.waitAppCompletion</code></td>
   <td><code>true</code></td>

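As an illustration of the migration this commit documents (not part of the patch itself), a `spark-submit` invocation on Spark 3.5+ could use the renamed keys in place of the deprecated YARN-specific ones; the master URL, values, and application jar below are placeholders:

```shell
# Replaces the deprecated spark.yarn.max.executor.failures and
# spark.yarn.executor.failuresValidityInterval keys with the generic
# keys introduced in Spark 3.5 (effective on YARN, or on Kubernetes
# when spark.kubernetes.allocation.pods.allocator is 'direct').
spark-submit \
  --master yarn \
  --conf spark.executor.maxNumFailures=10 \
  --conf spark.executor.failuresValidityInterval=1h \
  your-app.jar
```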
