Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4142#issuecomment-77921047
This change seems reasonable to me. I had an offline conversation with
Marcelo about edge cases and there were none I could think of.
The biggest concern here is that we now require the edge node that the
client is submitting their app from to have the configuration files that the
app should run with. This is to some extent required already - the edge node
must already have the Hadoop configs for the ResourceManager address and
NameNode address so that it can submit apps to the cluster and add files to the
distributed cache. So I think this is a reasonable expectation. However, it
would be good to call it out explicitly in the Spark on YARN doc.
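For context, the submission path this refers to can be sketched as below. The exact config directory path is an assumption for illustration; what matters is that the edge node's Hadoop config (yarn-site.xml for the ResourceManager address, core-site.xml/hdfs-site.xml for the NameNode) is visible to the client at submit time:

```shell
# Assumed example paths; adjust to the cluster's actual layout.
# The client reads these configs to find the ResourceManager and
# NameNode, so they must exist on the edge node it submits from.
export HADOOP_CONF_DIR=/etc/hadoop/conf

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \   # hypothetical application class
  my-app.jar
```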