holdenk commented on issue #23715: [SPARK-26803][PYTHON] Add sbin subdirectory to pyspark
URL: https://github.com/apache/spark/pull/23715#issuecomment-461893187
 
 
   So I think the original intent of the packaging is less important than supporting the use cases that folks are actually trying to accomplish.
   
   I'm hesitant to package more scripts, mostly because we don't have any particular testing for them to make sure they work when packaged this way. That being said, if we could add a small test which makes sure the history server can be started, I can see how it would be useful for local debugging with PySpark, and it certainly shouldn't do much harm.
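   
   Just to make that concrete, here is a rough sketch of what such a test could look like. It is only an illustration: it assumes this change ships the scripts under the installed `pyspark` package's `sbin/` directory and that the usual `start-history-server.sh`/`stop-history-server.sh` names are what ends up packaged.
   
   ```python
   # Rough sketch only, not a finished test.  Assumes the pip-installed
   # layout puts Spark's sbin scripts under the pyspark package directory.
   import os
   import subprocess
   import time
   import unittest

   import pyspark


   class PackagedSbinScriptsTest(unittest.TestCase):
       def test_history_server_starts_from_pip_layout(self):
           sbin_dir = os.path.join(os.path.dirname(pyspark.__file__), "sbin")
           start_script = os.path.join(sbin_dir, "start-history-server.sh")
           stop_script = os.path.join(sbin_dir, "stop-history-server.sh")
           self.assertTrue(os.path.isfile(start_script))
           # A zero exit code only shows the packaged launcher script runs
           # from the pip layout; it is not a full history server test.
           self.assertEqual(0, subprocess.call([start_script]))
           try:
               time.sleep(5)  # crude grace period for the daemon to come up
           finally:
               subprocess.call([stop_script])


   if __name__ == "__main__":
       unittest.main()
   ```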
   
   Perhaps we could also find a way to make `This Python packaged version of Spark is suitable for interacting with an existing cluster (be it Spark standalone, YARN, or Mesos) - but does not contain the tools required to set up your own standalone Spark cluster. You can download the full version of Spark from the [Apache Spark downloads page](http://spark.apache.org/downloads.html).` print as a warning for the sbin scripts in the PyPI package, so folks are less likely to use them incorrectly.
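   
   As for the warning, a minimal sketch of how it could be surfaced is below. Everything in it is an assumption for illustration: the helper name, the marker-file heuristic, and the idea that the packaged sbin scripts would call back into Python before starting a daemon are not existing Spark behaviour.
   
   ```python
   # Illustrative only.  The helper name and the marker-file check are
   # hypothetical; the point is just "detect a pip-installed layout and
   # print the existing PyPI caveat to stderr before doing anything".
   import os
   import sys

   PIP_ONLY_WARNING = (
       "This Python packaged version of Spark is suitable for interacting "
       "with an existing cluster (be it Spark standalone, YARN, or Mesos) - "
       "but does not contain the tools required to set up your own "
       "standalone Spark cluster. You can download the full version of "
       "Spark from the Apache Spark downloads page: "
       "http://spark.apache.org/downloads.html"
   )


   def maybe_warn_pip_packaged(spark_home):
       """Warn if SPARK_HOME looks like a pip-installed PySpark layout."""
       # Hypothetical marker file written at packaging time; any reliable
       # "this came from pip" signal would do.
       if os.path.isfile(os.path.join(spark_home, ".pip_installed")):
           print(PIP_ONLY_WARNING, file=sys.stderr)
   ```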
   
   What does everyone think?
