Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2577#discussion_r18243719
--- Diff: yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -328,10 +348,18 @@ private[spark] class ApplicationMaster(args: ApplicationMasterArguments,
   private def waitForSparkDriver(): ActorRef = {
     logInfo("Waiting for Spark driver to be reachable.")
     var driverUp = false
+    var count = 0
     val hostport = args.userArgs(0)
     val (driverHost, driverPort) = Utils.parseHostPort(hostport)
-    while (!driverUp) {
+
+    // spark driver should already be up since it launched us, but we don't want to
+    // wait forever, so wait 100 seconds max to match the cluster mode setting.
+    // Leave this config unpublished for now.
+    val numTries = sparkConf.getInt("spark.yarn.ApplicationMaster.client.waitTries", 1000)
--- End diff --
Yes, "client" was tacked on to indicate it's used in client mode, because the timing of the loop differs between the modes: in client mode the driver is already up when this is launched, whereas in cluster mode we are launching the user code, which takes some time (tens of seconds). It's an internal config right now, so users shouldn't be setting it.
I'll file a separate JIRA to fix up the mismatch in the docs/config.
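For context, a minimal sketch of the bounded wait loop this hunk builds toward, using the names from the diff (`driverUp`, `count`, `numTries`). The socket probe, the 100 ms sleep, and the stand-in config accessor are assumptions for illustration, not the exact code in the PR; note 1000 tries at 100 ms per failed try gives the 100-second cap the diff comment mentions:

    import java.net.Socket

    object DriverWaitSketch {
      // Hypothetical stand-ins for values supplied by the surrounding class.
      val driverHost = "localhost"
      val driverPort = 7077
      def getWaitTries: Int = 1000 // stands in for sparkConf.getInt("spark.yarn.ApplicationMaster.client.waitTries", 1000)

      def waitForSparkDriver(): Boolean = {
        var driverUp = false
        var count = 0
        val numTries = getWaitTries
        // Retry until the driver is reachable or we exhaust the budget:
        // 1000 tries * 100 ms sleep per failed try = 100 seconds max.
        while (!driverUp && count < numTries) {
          try {
            // Probe the driver's host:port; a successful connect means it is up.
            val socket = new Socket(driverHost, driverPort)
            socket.close()
            driverUp = true
          } catch {
            case _: Exception =>
              count += 1
              Thread.sleep(100)
          }
        }
        driverUp
      }
    }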