GitHub user Stibbons commented on the issue:

    https://github.com/apache/spark/pull/14963
  
    You and I agree, actually:
    - PySpark can run inside Anaconda, and this is indeed greatly valuable: 
it makes every package provided by Anaconda available to the driver (in 
client mode). A minimal sketch follows this list.
    - Making the same environment available to the executors is still a bit 
messy. Here we have an NFS share that every executor uses, but having the 
environment deployed automatically would be so cool; that's what the other 
PR is for :) (second sketch below).
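
    To illustrate the first point, a minimal sketch of the client-mode setup 
(the env path is hypothetical, not something from this PR): pointing 
`PYSPARK_DRIVER_PYTHON` at an Anaconda interpreter gives the driver that 
env's packages.

    ```sh
    # Hypothetical Anaconda env; adjust the path to your install.
    export PYSPARK_DRIVER_PYTHON=/opt/anaconda/envs/dev/bin/python
    pyspark  # the driver now sees every package installed in that env
    ```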
    
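    And a sketch of the NFS workaround for the second point, assuming the 
environment is mounted at the same (hypothetical) path on every node, so 
that `PYSPARK_PYTHON` can be picked up for the executor-side workers:

    ```sh
    # Hypothetical NFS mount, visible from every executor node.
    export PYSPARK_PYTHON=/nfs/envs/anaconda/bin/python
    spark-submit --master yarn job.py
    ```
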
    For development, PySpark will still use the Anaconda environment if the 
user wishes. Only for the `lint-python` run do we fall back to a virtualenv, 
and only for the duration of the script, as sketched below.
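
    Roughly the shape of that throwaway-virtualenv step (a sketch only, not 
the actual `dev/lint-python` code; tool names and paths are illustrative):

    ```sh
    # Build an isolated env just for linting, independent of Anaconda.
    virtualenv /tmp/spark-lint-env
    . /tmp/spark-lint-env/bin/activate
    pip install pep8    # illustrative; the real tool set may differ
    pep8 python/pyspark
    # Tear everything down once the script finishes.
    deactivate
    rm -rf /tmp/spark-lint-env
    ```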

