Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/3557#issuecomment-69432664
Although in general we should honor Spark properties over environment
variables, the app name has historically been a special case and should remain
one for backward compatibility. For this PR, I think the goal is to preserve
the behavior shown in the "before" table by making further changes in
`YarnClientSchedulerBackend`.
Additionally, it is unintuitive that setting both `SPARK_YARN_APP_NAME` and
`spark.app.name` produces different behavior in client mode versus cluster
mode. I think the app name should be a special case in both deploy modes, but
we can fix that in a separate PR.
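To make the proposed precedence concrete, here is a minimal sketch of the resolution rule being argued for: for backward compatibility the `SPARK_YARN_APP_NAME` environment variable wins over the `spark.app.name` property when both are set, in both deploy modes. The object and method names are hypothetical for illustration; this is not Spark's actual implementation.

```scala
// Hypothetical sketch of the app-name precedence discussed above.
// `env` stands in for sys.env and `props` for the SparkConf settings;
// neither the object nor the method exists in Spark itself.
object AppNameResolution {
  def resolveAppName(env: Map[String, String],
                     props: Map[String, String]): Option[String] =
    // Special case: the environment variable takes precedence over the
    // Spark property, preserving the legacy behavior.
    env.get("SPARK_YARN_APP_NAME").orElse(props.get("spark.app.name"))
}
```

Under this rule the outcome no longer depends on the deploy mode: if both are set, the environment variable is used everywhere, which is the consistent behavior suggested above.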