Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17307#discussion_r106328854
  
    --- Diff: docs/configuration.md ---
    @@ -1506,6 +1506,11 @@ Apart from these, the following properties are also 
available, and may be useful
         of this setting is to act as a safety-net to prevent runaway 
uncancellable tasks from rendering
         an executor unusable.
       </td>
    +  <td><code>spark.stage.maxConsecutiveAttempts</code></td>
    +  <td>4</td>
    +  <td>
    +    Number of consecutive stage retries allowed before a stage is aborted.
    --- End diff --
    
    there is an off-by-one difference between "attempts" and "retries" -- e.g. if this is set to 1, do you allow one retry, or do you give up after one attempt? I realize this is super minor, but I remember dealing with confusion about this for task failures. I don't think which one we use matters a ton, but the implementation here is "attempt", so how about just rewording the doc to "Number of consecutive stage attempts ..."?
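    To make the off-by-one concrete, here is a minimal Python sketch (not Spark's actual scheduler code; the function names and structure are illustrative) comparing the two counting semantics when every run of the stage fails:

```python
def total_runs_attempts(limit):
    # "attempts" semantics: abort once the attempt count reaches the limit,
    # so the limit is the total number of times the stage runs.
    attempts = 0
    while attempts < limit:
        attempts += 1  # each iteration is one (failing) stage attempt
    return attempts

def total_runs_retries(limit):
    # "retries" semantics: the first run is free; only re-runs count
    # against the limit, so the stage runs limit + 1 times in total.
    runs = 1       # initial attempt
    retries = 0
    while retries < limit:
        retries += 1
        runs += 1  # each retry is an additional run
    return runs

print(total_runs_attempts(1))  # 1 total run: give up after one attempt
print(total_runs_retries(1))   # 2 total runs: one attempt plus one retry
```

    With the same limit of 1, the two readings differ by exactly one run, which is the ambiguity the doc wording should avoid.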
    
    Same goes for the comments which use "retries".
    
    (I see now this wording was my fault, from making one suggestion in one place and another elsewhere -- sorry about that.)


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]