Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2577#discussion_r18249133
  
    --- Diff: yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
    @@ -328,10 +348,18 @@ private[spark] class ApplicationMaster(args: ApplicationMasterArguments,
       private def waitForSparkDriver(): ActorRef = {
         logInfo("Waiting for Spark driver to be reachable.")
         var driverUp = false
    +    var count = 0
         val hostport = args.userArgs(0)
         val (driverHost, driverPort) = Utils.parseHostPort(hostport)
     -    while (!driverUp) {
     +
     +    // spark driver should already be up since it launched us, but we don't want to
     +    // wait forever, so wait 100 seconds max to match the cluster mode setting.
     +    // Leave this config unpublished for now.
     +    val numTries = sparkConf.getInt("spark.yarn.ApplicationMaster.client.waitTries", 1000)
    --- End diff ---
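    The quoted hunk cuts off before the loop body, so for context, here is a minimal sketch (not the code under review) of the bounded wait loop this excerpt sets up. The 100 ms sleep interval is an assumption inferred from the comment's arithmetic (1000 tries at 100 ms is roughly 100 seconds); the actual loop body is not shown above.
    
    ```scala
    import java.net.Socket
    
    // Sketch only, not the patch under review: probe the driver's host:port
    // until it is reachable or the retry budget is exhausted.
    def waitForDriver(driverHost: String, driverPort: Int, numTries: Int): Boolean = {
      var driverUp = false
      var count = 0
      while (!driverUp && count < numTries) {
        try {
          new Socket(driverHost, driverPort).close() // connection succeeded => driver is up
          driverUp = true
        } catch {
          case _: java.io.IOException =>
            count += 1
            Thread.sleep(100) // assumed interval: 1000 tries * 100 ms = 100 seconds
        }
      }
      driverUp
    }
    ```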
    
    It's kind of inconsistent to use `applicationMaster.client.waitTries` for 
client mode but `applicationMaster.waitTries` for cluster mode, and the 
existing documentation for the latter makes no mention of cluster mode even 
though it's only used there. It's fine to keep the `client` config here but we 
should make the other one `applicationMaster.cluster.waitTries` in a future 
JIRA and deprecate the less specific one.
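    For illustration, a minimal sketch of how the suggested rename plus deprecation could be resolved, assuming the proposed key is `spark.yarn.applicationMaster.cluster.waitTries` and the existing cluster-mode key is `spark.yarn.applicationMaster.waitTries`; both full key names are taken from this discussion rather than from a released config, so treat them as illustrative:
    
    ```scala
    import org.apache.spark.SparkConf
    
    // Sketch only: prefer the cluster-specific key proposed in this review and
    // fall back to the existing, less specific key, warning that it is deprecated.
    // Both full key names are illustrative.
    def clusterWaitTries(conf: SparkConf, default: Int = 10): Int = {
      conf.getOption("spark.yarn.applicationMaster.cluster.waitTries")
        .orElse {
          val legacy = conf.getOption("spark.yarn.applicationMaster.waitTries")
          if (legacy.isDefined) {
            // A real change would go through Spark's own deprecation/logging helpers.
            System.err.println("spark.yarn.applicationMaster.waitTries is deprecated; " +
              "use spark.yarn.applicationMaster.cluster.waitTries instead.")
          }
          legacy
        }
        .map(_.toInt)
        .getOrElse(default)
    }
    ```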

