Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8760#discussion_r43797266
  
    --- Diff: docs/configuration.md ---
    @@ -1141,6 +1141,48 @@ Apart from these, the following properties are also available, and may be useful
       </td>
     </tr>
     <tr>
    +  <td><code>spark.scheduler.blacklist.enabled</code></td>
    +  <td>true</td>
    +  <td>
    +    If set to true, the executor blacklist feature is enabled, avoiding allocating new tasks on bad executors. The logic that defines a bad executor is based on BlacklistStrategy, which is also configurable.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.scheduler.executorTaskBlacklistTime</code></td>
    +  <td>0L</td>
    +  <td>
    +    The threshold used to decide whether to blacklist an executor: if the last failure time is older than the current time minus executorTaskBlacklistTime, then the executor can be removed from the blacklist.
    +  </td>
    +</tr>
    +<tr>
    +  <td><code>spark.scheduler.blacklist.recoverPeriod</code></td>
    +  <td>60L</td>
    +  <td>
    +    The period between runs of the blacklist recovery process.
    --- End diff --
    
    lets include a timeunit here too, so "60s", and change the text to "If 
executor blacklisting is enabled, this controls how often to check if executors 
can be returned to the pool of active executors."
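For context, a minimal sketch of how these properties might be set in `spark-defaults.conf`. The property names are taken from the diff under review and are not yet merged, so they may still change; the `60s` value for the recovery period follows the time-unit suggestion in this comment, and the other values are illustrative assumptions only:

```
# spark-defaults.conf -- property names as proposed in this PR (not merged; may change)
spark.scheduler.blacklist.enabled            true
# how long a failed task keeps an executor blacklisted, in ms (illustrative value)
spark.scheduler.executorTaskBlacklistTime    60000
# how often to check whether blacklisted executors can be recovered (per the suggested "60s")
spark.scheduler.blacklist.recoverPeriod      60s
```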


