Makes sense. If it has to be at least 1, then the check must be >=: with a
limit of n, only n - 1 retries are allowed, so the attempt that produces the
nth failure is the one that must error out. LGTM.
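
For concreteness, a minimal Scala sketch of that comparison (not Spark's
actual TaskSetManager code, just the semantics described in the docs):

    // With spark.task.maxFailures = n, the job should be given up on
    // once a task has failed n times, i.e. after n - 1 retries.
    def shouldAbort(numFailures: Int, maxTaskFailures: Int): Boolean =
      numFailures >= maxTaskFailures

    // maxTaskFailures = 1 allows no retries: the first failure aborts.
    assert(shouldAbort(numFailures = 1, maxTaskFailures = 1))
    // maxTaskFailures = 4 (Spark's default) allows three retries.
    assert(!shouldAbort(numFailures = 3, maxTaskFailures = 4))
    assert(shouldAbort(numFailures = 4, maxTaskFailures = 4))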

On Thu, Jun 4, 2015, 12:26 AM darabos <[email protected]> wrote:

> Github user darabos commented on the pull request:
>
>     https://github.com/apache/spark/pull/6621#issuecomment-108632096
>
>     Yes, it's not the intuitive definition for me either. But it's in
> http://spark.apache.org/docs/latest/configuration.html:
>
>     > Number of individual task failures before giving up on the job.
> Should be greater than or equal to 1. Number of allowed retries = this
> value - 1.
