This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.5
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.5 by this push:
     new cbaefe9cc6a [SPARK-45969][DOCS] Document configuration change of executor failure tracker
cbaefe9cc6a is described below

commit cbaefe9cc6a22c940728b6717aeaa51c7d550ddc
Author: Cheng Pan <cheng...@apache.org>
AuthorDate: Sun Dec 10 14:03:37 2023 -0800

    [SPARK-45969][DOCS] Document configuration change of executor failure tracker
    
    ### What changes were proposed in this pull request?
    
    This is a follow-up of SPARK-41210 (filed under a new JIRA ticket because SPARK-41210 was already released in 3.5.0). This PR updates the docs and migration guide for the configuration change of the executor failure tracker.
    
    ### Why are the changes needed?
    
    The docs update was missing from the previous changes, and it was also requested at https://github.com/apache/spark/commit/40872e9a094f8459b0b6f626937ced48a8d98efb#r132516892 by tgravescs.
    
    ### Does this PR introduce _any_ user-facing change?
    
    Yes, docs changed
    
    ### How was this patch tested?
    
    Review
    
    ### Was this patch authored or co-authored using generative AI tooling?
    
    No
    
    Closes #43863 from pan3793/SPARK-45969.
    
    Authored-by: Cheng Pan <cheng...@apache.org>
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
    (cherry picked from commit 7a43de193aa5a0856e098088728dccea37f169c5)
    Signed-off-by: Dongjoon Hyun <dh...@apple.com>
---
 .../org/apache/spark/internal/config/package.scala  |  4 ++--
 docs/configuration.md                               | 21 +++++++++++++++++++++
 docs/core-migration-guide.md                        |  6 ++++++
 docs/running-on-yarn.md                             | 17 -----------------
 4 files changed, 29 insertions(+), 19 deletions(-)
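
To illustrate the renamed keys, here is a minimal sketch of user code, not part of the patch; the values are placeholders (Scala, using the public SparkConf API):

    import org.apache.spark.SparkConf

    // Deprecated since Spark 3.5: the YARN-prefixed keys.
    val before = new SparkConf()
      .set("spark.yarn.max.executor.failures", "6")
      .set("spark.yarn.executor.failuresValidityInterval", "1h")

    // Spark 3.5 replacements; also honored on Kubernetes when
    // spark.kubernetes.allocation.pods.allocator is set to 'direct'.
    val after = new SparkConf()
      .set("spark.executor.maxNumFailures", "6")
      .set("spark.executor.failuresValidityInterval", "1h")
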

diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index 600cbf151e1..c5e23cae1f8 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -924,7 +924,7 @@ package object config {
 
   private[spark] val MAX_EXECUTOR_FAILURES =
     ConfigBuilder("spark.executor.maxNumFailures")
-      .doc("Spark exits if the number of failed executors exceeds this 
threshold. " +
+      .doc("The maximum number of executor failures before failing the 
application. " +
         "This configuration only takes effect on YARN, or Kubernetes when " +
         "`spark.kubernetes.allocation.pods.allocator` is set to 'direct'.")
       .version("3.5.0")
@@ -933,7 +933,7 @@ package object config {
 
   private[spark] val EXECUTOR_ATTEMPT_FAILURE_VALIDITY_INTERVAL_MS =
     ConfigBuilder("spark.executor.failuresValidityInterval")
-      .doc("Interval after which Executor failures will be considered 
independent and not " +
+      .doc("Interval after which executor failures will be considered 
independent and not " +
         "accumulate towards the attempt count. This configuration only takes 
effect on YARN, " +
         "or Kubernetes when `spark.kubernetes.allocation.pods.allocator` is 
set to 'direct'.")
       .version("3.5.0")
diff --git a/docs/configuration.md b/docs/configuration.md
index f79406c5b6d..645c3e8208a 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -514,6 +514,27 @@ of the most common options to set are:
   </td>
   <td>3.2.0</td>
 </tr>
+<tr>
+  <td><code>spark.executor.maxNumFailures</code></td>
+  <td>numExecutors * 2, with minimum of 3</td>
+  <td>
+    The maximum number of executor failures before failing the application.
+    This configuration only takes effect on YARN, or Kubernetes when 
+    `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.
+  </td>
+  <td>3.5.0</td>
+</tr>
+<tr>
+  <td><code>spark.executor.failuresValidityInterval</code></td>
+  <td>(none)</td>
+  <td>
+    Interval after which executor failures will be considered independent and
+    not accumulate towards the attempt count.
+    This configuration only takes effect on YARN, or Kubernetes when 
+    `spark.kubernetes.allocation.pods.allocator` is set to 'direct'.
+  </td>
+  <td>3.5.0</td>
+</tr>
 </table>
 
 Apart from these, the following properties are also available, and may be useful in some situations:
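
A hedged usage sketch for the two properties documented above, assuming a standard SparkSession entry point; the app name and values are invented:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("executor-failure-tracker-demo") // hypothetical name
      // Fail the application after 8 executor failures...
      .config("spark.executor.maxNumFailures", "8")
      // ...unless failures are older than the 30-minute validity window.
      .config("spark.executor.failuresValidityInterval", "30m")
      .getOrCreate()
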
diff --git a/docs/core-migration-guide.md b/docs/core-migration-guide.md
index 3f97a484e1a..36465cc3f4e 100644
--- a/docs/core-migration-guide.md
+++ b/docs/core-migration-guide.md
@@ -22,6 +22,12 @@ license: |
 * Table of contents
 {:toc}
 
+## Upgrading from Core 3.4 to 3.5
+
+- Since Spark 3.5, `spark.yarn.executor.failuresValidityInterval` is deprecated. Use `spark.executor.failuresValidityInterval` instead.
+
+- Since Spark 3.5, `spark.yarn.max.executor.failures` is deprecated. Use `spark.executor.maxNumFailures` instead.
+
 ## Upgrading from Core 3.3 to 3.4
 
 - Since Spark 3.4, Spark driver will own `PersistentVolumeClaim`s and try to reuse them if they are not assigned to live executors. To restore the behavior before Spark 3.4, you can set `spark.kubernetes.driver.ownPersistentVolumeClaim` to `false` and `spark.kubernetes.driver.reusePersistentVolumeClaim` to `false`.
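
A hypothetical migration helper matching the two deprecation notes above (not part of Spark; it copies old values verbatim to the new keys):

    import org.apache.spark.SparkConf

    // Map each deprecated key to its Spark 3.5 replacement.
    val renames = Map(
      "spark.yarn.max.executor.failures" -> "spark.executor.maxNumFailures",
      "spark.yarn.executor.failuresValidityInterval" ->
        "spark.executor.failuresValidityInterval")

    def migrate(conf: SparkConf): SparkConf =
      renames.foldLeft(conf) { case (c, (oldKey, newKey)) =>
        // Rewrite only when the old key is set and the new one is not.
        if (c.contains(oldKey) && !c.contains(newKey)) {
          c.set(newKey, c.get(oldKey)).remove(oldKey)
        } else c
      }
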
diff --git a/docs/running-on-yarn.md b/docs/running-on-yarn.md
index d577b70a680..9b4e59a119e 100644
--- a/docs/running-on-yarn.md
+++ b/docs/running-on-yarn.md
@@ -291,14 +291,6 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>1.4.0</td>
 </tr>
-<tr>
-  <td><code>spark.yarn.max.executor.failures</code></td>
-  <td>numExecutors * 2, with minimum of 3</td>
-  <td>
-    The maximum number of executor failures before failing the application.
-  </td>
-  <td>1.0.0</td>
-</tr>
 <tr>
   <td><code>spark.yarn.historyServer.address</code></td>
   <td>(none)</td>
@@ -499,15 +491,6 @@ To use a custom metrics.properties for the application master and executors, upd
   </td>
   <td>3.3.0</td>
 </tr>
-<tr>
-  <td><code>spark.yarn.executor.failuresValidityInterval</code></td>
-  <td>(none)</td>
-  <td>
-  Defines the validity interval for executor failure tracking.
-  Executor failures which are older than the validity interval will be ignored.
-  </td>
-  <td>2.0.0</td>
-</tr>
 <tr>
   <td><code>spark.yarn.submit.waitAppCompletion</code></td>
   <td><code>true</code></td>

