GitHub user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14079#discussion_r70890504
  
    --- Diff: core/src/main/scala/org/apache/spark/internal/config/package.scala ---
    @@ -97,6 +97,49 @@ package object config {
         .toSequence
         .createWithDefault(Nil)
     
    +  // Blacklist confs
    +  private[spark] val BLACKLIST_ENABLED =
    +    ConfigBuilder("spark.scheduler.blacklist.enabled")
    +      .booleanConf
    +      .createOptional
    +
    +  private[spark] val MAX_TASK_ATTEMPTS_PER_NODE =
    +    ConfigBuilder("spark.blacklist.maxTaskAttemptsPerNode")
    --- End diff --
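
    The diff is cut off at the comment anchor, so the type and default of this
    entry are not shown above. For illustration only, an entry of this shape
    would typically be completed along these lines (the type and default here
    are assumed, not taken from the PR):

```scala
// Hypothetical completion, for illustration; continues the file shown above.
private[spark] val MAX_TASK_ATTEMPTS_PER_NODE =
  ConfigBuilder("spark.blacklist.maxTaskAttemptsPerNode")
    .intConf
    .createWithDefault(2)
```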
    
    @tgravescs at first I had changed this to "maxFailedTasksPerNode" as you 
suggested, but then I realized the name was intentionally different.  The other 
configs are about how many *different* tasks fail on an executor before you 
blacklist that executor.  But this one is about how many *attempts* of one 
*particular* task can fail on a node before you blacklist that node for that 
task.
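    
    To make the distinction concrete, here is a minimal sketch (all names, 
thresholds, and sample data below are hypothetical, not the actual scheduler 
code):

```scala
// Sketch only: contrasts the two counting schemes discussed above.
object BlacklistCountingSketch {
  // Other confs: blacklist an executor once this many *distinct* tasks
  // have failed on it.
  val maxFailedTasksPerExecutor = 2

  // This conf: blacklist a node *for one particular task* once that task
  // has had this many failed attempts on the node.
  val maxTaskAttemptsPerNode = 2

  // Distinct failed task indices, keyed by executor.
  val failedTasksByExecutor: Map[String, Set[Int]] = Map("exec-1" -> Set(0, 3, 7))

  // Failed attempt counts, keyed by (node, task index).
  val failedAttempts: Map[(String, Int), Int] = Map(("node-a", 5) -> 2)

  def executorBlacklisted(exec: String): Boolean =
    failedTasksByExecutor.getOrElse(exec, Set.empty[Int]).size >= maxFailedTasksPerExecutor

  def nodeBlacklistedForTask(node: String, task: Int): Boolean =
    failedAttempts.getOrElse((node, task), 0) >= maxTaskAttemptsPerNode

  def main(args: Array[String]): Unit = {
    println(executorBlacklisted("exec-1"))        // true: 3 distinct tasks failed there
    println(nodeBlacklistedForTask("node-a", 5))  // true: task 5 failed twice on node-a
  }
}
```

    So the attempts-per-node conf counts repeated failures of a single task, 
while the executor-level confs count failures across different tasks.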
    
    the original name was definitely not clear, so I've changed it here.  What 
do you think of this version?

