Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/5297#discussion_r27597016
  
    --- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
    @@ -66,6 +66,8 @@ private[spark] class Client(
       private val executorMemoryOverhead = args.executorMemoryOverhead // MB
       private val distCacheMgr = new ClientDistributedCacheManager()
       private val isClusterMode = args.isClusterMode
    +  private val fireAndForget = isClusterMode &&
    +    sparkConf.getBoolean("spark.yarn.cluster.quiet", false)
    --- End diff ---
    
    Right now we don't have any configs that use .cluster. or .client. In
other places we do reference .am., .driver., etc., which matches other Spark
configs that use .driver. and .executor. So I think .client. would fit the
current naming conventions, where it refers to the client process rather than
client mode. That doesn't mean it couldn't be confusing to some users, though.
We could try something like spark.yarn.submit.[waitForCompletion|waitAppCompletion].
Opinions on that?
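    A minimal sketch of how a name along those lines might wire into
Client.scala, assuming the semantics are flipped so the config defaults to
waiting and fire-and-forget means the user explicitly opted out; the exact
key, default, and placement are assumptions for discussion, not part of this
patch:

        // Hypothetical illustration only: the key name and default are still
        // open questions in this thread.
        import org.apache.spark.SparkConf

        val sparkConf = new SparkConf()
        val isClusterMode = true // stand-in for args.isClusterMode

        // Wait by default; only fire-and-forget when explicitly disabled.
        val fireAndForget = isClusterMode &&
          !sparkConf.getBoolean("spark.yarn.submit.waitAppCompletion", true)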

