Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21068#discussion_r185570861
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala ---
    @@ -328,4 +328,19 @@ package object config {
         CACHED_FILES_TYPES,
         CACHED_CONF_ARCHIVE)
     
    +  /* YARN allocator-level blacklisting related config entries. */
    +  private[spark] val YARN_EXECUTOR_LAUNCH_BLACKLIST_ENABLED =
    +    ConfigBuilder("spark.yarn.executor.launch.blacklist.enabled")
    +      .booleanConf
    +      .createOptional
    +
    +  private[spark] val YARN_BLACKLIST_MAX_NODE_BLACKLIST_RATIO =
    +    ConfigBuilder("spark.yarn.blacklist.maxNodeBlacklistRatio")
    +      .doc("There is limit for the number of blacklisted nodes sent to YARN. " +
    +        "And it is calculated by multiplying the number of cluster nodes with this ratio.")
    --- End diff ---
    
    I don't have very strong opinions about the naming, but I'd like both of these confs to have the same prefix. If I had to choose, I'd go with "spark.yarn.blacklist".
    
    I'd update the doc to "The maximum fraction of the cluster nodes that will be blacklisted for yarn allocations, based on task & allocation failures".
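    
    For concreteness, a rough sketch of what the two entries could look like under that common prefix, with the reworded doc. This is just an illustration of the suggestion, not the PR's code; the exact val/key names and the ".doubleConf" type for the ratio entry are assumptions on my part:
    
        private[spark] val YARN_BLACKLIST_EXECUTOR_LAUNCH_BLACKLISTING_ENABLED =
          ConfigBuilder("spark.yarn.blacklist.executor.launch.blacklisting.enabled")
            .booleanConf
            .createOptional
    
        private[spark] val YARN_BLACKLIST_MAX_NODE_BLACKLIST_RATIO =
          ConfigBuilder("spark.yarn.blacklist.maxNodeBlacklistRatio")
            .doc("The maximum fraction of the cluster nodes that will be blacklisted " +
              "for yarn allocations, based on task & allocation failures.")
            .doubleConf  // assumed type; the diff above cuts off before the type
            .createOptional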
    
    These are currently undocumented ... but I'm fine keeping it that way.
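    
    To spell out the ratio semantics the doc describes: the cap is numClusterNodes * ratio, so on a 10-node cluster with a ratio of 0.3 at most 3 nodes would ever be reported to YARN. A standalone sketch of that capping step (the object, helper, and parameter names are made up for illustration, not the allocator's actual ones):
    
        object NodeBlacklistCapSketch {
          // Hypothetical helper: cap how many nodes are reported to YARN's
          // blacklist at floor(numClusterNodes * maxNodeBlacklistRatio).
          def cappedBlacklist(
              candidates: Seq[String],
              numClusterNodes: Int,
              maxNodeBlacklistRatio: Double): Seq[String] = {
            require(maxNodeBlacklistRatio >= 0.0 && maxNodeBlacklistRatio <= 1.0,
              s"ratio must be in [0, 1], got $maxNodeBlacklistRatio")
            val maxNodes = (numClusterNodes * maxNodeBlacklistRatio).toInt
            candidates.take(maxNodes)
          }
    
          def main(args: Array[String]): Unit = {
            // 10-node cluster, ratio 0.3: at most 3 of the 4 candidates are sent.
            println(cappedBlacklist(Seq("n1", "n2", "n3", "n4"),
              numClusterNodes = 10, maxNodeBlacklistRatio = 0.3))
            // => List(n1, n2, n3)
          }
        }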

