Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-46027049
@vanzin I think you commented earlier about moving this into SparkSubmit. Sorry,
I had missed that. Do you have any objections to keeping it closer to the YARN
code instead?
Side note - we can't remove the code from YarnClientSchedulerBackend.scala
because it handles the env variable, and it also needs to keep the default of
looking in HDFS first for backward compatibility.
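For context, here's a minimal sketch (not the actual PR code) of the kind of precedence YarnClientSchedulerBackend has to preserve: the config property, when set, wins, and the legacy environment variable stays as a fallback for existing deployments. The property and env-var names below are assumptions for illustration:

```scala
import org.apache.spark.SparkConf

object DistFilesResolution {
  // Returns the comma-separated file list, preferring the new config property
  // and falling back to the legacy environment variable for compatibility.
  // "spark.yarn.dist.files" / "SPARK_YARN_DIST_FILES" are assumed names.
  def resolveDistFiles(conf: SparkConf): Option[String] = {
    conf.getOption("spark.yarn.dist.files")           // new-style config, if set
      .orElse(sys.env.get("SPARK_YARN_DIST_FILES"))   // legacy env variable
  }
}
```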
@witgo can you add the configs to the YARN documentation
(docs/running-on-yarn.md)? You can just use the same descriptions that
spark-submit prints out (a rough sketch of the doc entries follows the descriptions):
archives: Comma-separated list of archives to be extracted into the
working directory of each executor.
files: Comma-separated list of files to be placed in the working
directory of each executor.
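Something along these lines in the Spark properties table of docs/running-on-yarn.md would probably do; the property names and default values here are my assumption of what the PR uses, so adjust to match:

```html
<tr>
  <td><code>spark.yarn.dist.archives</code></td>
  <td>(none)</td>
  <td>Comma-separated list of archives to be extracted into the working directory of each executor.</td>
</tr>
<tr>
  <td><code>spark.yarn.dist.files</code></td>
  <td>(none)</td>
  <td>Comma-separated list of files to be placed in the working directory of each executor.</td>
</tr>
```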