Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/6671#discussion_r32435228
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -460,7 +460,9 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
    |                        on one of the worker machines inside the cluster ("cluster")
    |                        (Default: client).
    |  --class CLASS_NAME    Your application's main class (for Java / Scala apps).
-   |  --name NAME           A name of your application.
+   |  --name NAME           A name of your application. In yarn-cluster mode the name
--- End diff --
> SparkConf.set("spark.app.name", "foo") will not work in cluster-mode applications

I know, and that's not what I meant. I meant that even if we remove `SparkConf.setAppName`, people can still set the app name by setting the config directly in client mode, and cause exactly the same discrepancy. So the only option to really have the command-line version take over is to introduce the concept of a "final" config that cannot be overridden once it's set.

I don't know whether having something like that is worth it to fix this small issue. It's better to just discourage people from setting the app name programmatically.
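To illustrate the "final" config idea discussed above, here is a minimal sketch. It is not Spark's actual `SparkConf` API; the class and method names (`FinalizableConf`, `setFinal`) are hypothetical, showing only how a key set from the command line could be locked against later programmatic overrides:

```scala
// Hypothetical sketch: a config map where keys set from the command line
// are "finalized" and cannot be overridden by later programmatic set() calls.
// This is NOT Spark's real SparkConf; names here are illustrative only.
class FinalizableConf {
  private val settings = scala.collection.mutable.Map[String, String]()
  private val finalized = scala.collection.mutable.Set[String]()

  // Set from the command line (e.g. --name): wins and locks the key.
  def setFinal(key: String, value: String): this.type = {
    settings(key) = value
    finalized += key
    this
  }

  // Programmatic set: silently ignored once the key has been finalized.
  def set(key: String, value: String): this.type = {
    if (!finalized.contains(key)) settings(key) = value
    this
  }

  def get(key: String): Option[String] = settings.get(key)
}

val conf = new FinalizableConf
conf.setFinal("spark.app.name", "from-command-line") // e.g. spark-submit --name
conf.set("spark.app.name", "from-user-code")         // ignored: key is final
assert(conf.get("spark.app.name").contains("from-command-line"))
```

Whether ignoring the override silently (as above) or throwing an exception would be the better behavior is exactly the kind of design question that makes this heavier than the small issue it fixes.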