[
https://issues.apache.org/jira/browse/SPARK-13587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15212191#comment-15212191
]
Juliet Hougland commented on SPARK-13587:
-----------------------------------------
I really do think Spark and PySpark need to stay out of the business of
installing anything for people. A generic executable is relatively neutral as
to what exactly that executable does, which is good. Spark's scope should be
computation/execution, not environment setup and teardown.
Have you considered using NFS or Amazon EFS to let users create and manage
their own envs, and then mounting those on worker/executor nodes? This is an
elegant solution that we have seen deployed successfully; many experienced
people at Cloudera, such as Guru M and Tristan Z, recommend it. Given the
description of your problem, I believe it should suit your needs.
As [~vanzin] suggested, "one alternative to shared mounts is to store the thing
in HDFS and use something like --files / --archives in Spark. The distribution
to new containers is handled by YARN, and Spark just would need some
adjustments to find the right executable inside those archives."
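The --archives workflow described above can be sketched roughly as follows. This is a hedged illustration, not something prescribed in this thread: the use of conda-pack, the environment name, the HDFS path, and the "environment" alias are all assumptions for the sake of the example.

```shell
# Build a local conda env and pack it into a relocatable archive.
# (conda-pack is one option; the thread does not prescribe a tool.)
conda create -y -n myenv python=3 numpy
conda pack -n myenv -o myenv.tar.gz

# Store the archive in HDFS so YARN can distribute it to containers.
hdfs dfs -put myenv.tar.gz /user/me/envs/

# Submit the job. YARN unpacks the archive into each container under
# the alias after the '#', and PYSPARK_PYTHON points Spark at the
# Python interpreter inside that unpacked archive.
export PYSPARK_PYTHON=./environment/bin/python
spark-submit \
  --master yarn \
  --archives hdfs:///user/me/envs/myenv.tar.gz#environment \
  my_job.py
```

Per the quoted suggestion, the distribution itself is handled by YARN; the only Spark-side adjustment is locating the right Python executable inside the unpacked archive.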
> Support virtualenv in PySpark
> -----------------------------
>
> Key: SPARK-13587
> URL: https://issues.apache.org/jira/browse/SPARK-13587
> Project: Spark
> Issue Type: New Feature
> Components: PySpark
> Reporter: Jeff Zhang
>
> Currently, it's not easy for users to add third-party Python packages in
> pyspark.
> * One way is to use --py-files (suitable for simple dependencies, but not
> for complicated ones, especially those with transitive dependencies)
> * Another way is to install packages manually on each node (time-consuming,
> and it is not easy to switch between different environments)
> Python now has 2 different virtualenv implementations: one is the native
> virtualenv, the other is through conda. This jira is trying to bring these
> 2 tools to the distributed environment.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]