GitHub user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14544#discussion_r74109798
  
    --- Diff: docs/spark-standalone.md ---
    @@ -196,6 +196,21 @@ SPARK_MASTER_OPTS supports the following system properties:
       </td>
     </tr>
     <tr>
    +  <td><code>spark.deploy.maxExecutorRetries</code></td>
    +  <td>10</td>
    +  <td>
    +    Limit on the maximum number of back-to-back executor failures that can occur before the
    +    standalone cluster manager removes a faulty application. An application will never be removed
    +    if it has any running executors. If an application experiences more than
    +    <code>spark.deploy.maxExecutorRetries</code> failures in a row, no executors
    +    successfully start running in between those failures, and the application has no running
    +    executors then the standalone cluster manager will remove the application and mark it as failed.
    +    To disable this automatic removal, set <code>spark.deploy.maxExecutorRetries</code> to
    +    <code>-1</code>
    --- End diff ---
    
    add a period at the end.
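
    Since this table sits under <code>SPARK_MASTER_OPTS</code>, the property is
    configured on the standalone master process rather than per application. A
    minimal sketch of how it might be set (the values are illustrative, not part
    of this patch):

        # conf/spark-env.sh on the standalone master (example values)
        # Allow up to 5 back-to-back executor failures before the application is removed:
        export SPARK_MASTER_OPTS="-Dspark.deploy.maxExecutorRetries=5"
        # Or disable automatic removal entirely:
        # export SPARK_MASTER_OPTS="-Dspark.deploy.maxExecutorRetries=-1"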


