GitHub user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/5478#issuecomment-93057067
@Sephiroth-Lin the point of running PySpark on YARN is that the user does
not have to install Spark on the slave machines. Instead, we package the Python
files in the assembly jar, which YARN automatically ships to all containers.
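For reference, this works because a jar is just a zip archive, so Python's
built-in zipimport machinery can import `pyspark` straight out of it once the
jar is on the path. A minimal sketch (the jar filename here is hypothetical; in
a real container it is whatever YARN localized into the working directory):

```python
import sys

# Hypothetical path: wherever YARN localized the assembly jar in the container.
assembly_jar = "spark-assembly.jar"

# A jar is a zip archive, so putting it on sys.path lets Python's zipimport
# machinery import the packaged Python files directly out of it. No Spark
# installation on the slave's local file system is required.
sys.path.insert(0, assembly_jar)

import pyspark  # resolved from inside the jar via zipimport
print(pyspark.__file__)
```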
This change assumes that the Python files are already present on the
slave machines, since `PYTHONPATH` reads from the local file system. I don't
believe this is a deployment requirement we want to enforce, especially
since the user would now have to ensure the Spark Python files are consistent
across all the machines (as they must in standalone mode).