Github user Devian-ua commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16819#discussion_r102014977
  
    --- Diff: resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -1193,6 +1189,37 @@ private[spark] class Client(
           }
       }
     
    +  def init(): Unit = {
    +    launcherBackend.connect()
    +    // Setup the credentials before doing anything else,
    +    // so we don't have issues at any point.
    +    setupCredentials()
    +    yarnClient.init(yarnConf)
    +    yarnClient.start()
    +
    +    setMaxNumExecutors()
    +  }
    +
    +  /**
    +   * If dynamic allocation is enabled and the user doesn't set
    +   * spark.dynamicAllocation.maxExecutors, set the maximum number of executors
    +   * based on the total VCores of the YARN cluster. If dynamic allocation is
    +   * not enabled, don't set it.
    +   */
    +  private def setMaxNumExecutors(): Unit = {
    +    if (Utils.isDynamicAllocationEnabled(sparkConf)) {
    +
    +      val defaultMaxNumExecutors = DYN_ALLOCATION_MAX_EXECUTORS.defaultValue.get
    +      if (defaultMaxNumExecutors == sparkConf.get(DYN_ALLOCATION_MAX_EXECUTORS)) {
    +        val executorCores = sparkConf.getInt("spark.executor.cores", 1)
    +        val maxNumExecutors = yarnClient.getNodeReports().asScala.
    --- End diff --
    
     Shouldn't we take the queue's maxResources amount into account from the [ResourceManager REST APIs](https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html)?
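
     A rough sketch of what that could look like (purely illustrative, not part of
     this PR): instead of hitting the REST endpoint directly, the same queue limit
     could be read through `YarnClient.getQueueInfo`. The queue name, the treatment
     of `getMaximumCapacity` as a cluster-wide fraction, and the lower bound of 1
     are assumptions for the example only.

         import scala.collection.JavaConverters._

         // Hypothetical helper (not in the PR): cap the derived maximum number of
         // executors by the queue's maximum capacity instead of the whole cluster.
         private def maxExecutorsForQueue(queueName: String): Int = {
           // Total VCores across all node reports, as in the snippet above.
           val totalVCores = yarnClient.getNodeReports().asScala
             .map(_.getCapability.getVirtualCores).sum
           val executorCores = sparkConf.getInt("spark.executor.cores", 1)

           // getMaximumCapacity is treated here as a fraction of the cluster; for
           // nested queues it would need to be resolved against the parent queues.
           val queueMaxCapacity = yarnClient.getQueueInfo(queueName).getMaximumCapacity
           val queueVCores = (totalVCores * queueMaxCapacity).toInt

           math.max(1, queueVCores / executorCores)
         }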

