Github user Stibbons commented on the issue:

    https://github.com/apache/spark/pull/14180
  
    This new version is meant to be rebased after #13599 is merged.
    
    Here is my current state:
    - only Standalone and YARN are supported; Mesos is not supported
    - only tested with virtualenv/pip; Conda is not tested
    - wheelhouse deployment works (i.e., all dependencies can be packaged into a single zip file and installed on the workers automatically and quickly)
    - for example, deploying a package with numpy + pandas + scikit-learn is fast once the installation has been done at least once on all workers, and if the wheelhouse provides all wheels for all versions, pip installs everything very quickly and without an internet connection
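    The wheelhouse workflow above can be sketched with stock pip commands (a hedged sketch, not the code in this PR; `requirements.txt`, `wheelhouse/`, and `wheelhouse.zip` are placeholder names):

    ```shell
    # On a build machine: collect wheels for every dependency into one directory
    pip wheel --wheel-dir ./wheelhouse -r requirements.txt

    # Bundle the wheelhouse so it can be shipped to the workers as a single file
    zip -r wheelhouse.zip wheelhouse

    # On each worker: install from the local wheelhouse only, no network access needed
    pip install --no-index --find-links ./wheelhouse -r requirements.txt
    ```

    Because `--no-index` disables PyPI lookups and `--find-links` points pip at the local directory, the install succeeds offline as long as the wheelhouse contains a wheel for every dependency on the worker's platform and Python version.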
    
    I'd like to have the same ability to specify the entry point in Python that we have in Java/Scala with the `--class` argument of `spark-submit`.

