tgravescs commented on pull request #29770:
URL: https://github.com/apache/spark/pull/29770#issuecomment-693399056


   This is an interesting issue.  One of the problems is how spark-submit can 
properly know which arguments are supported by a given cluster manager, and 
similarly which deployment modes are supported. There is a lot of 
cluster-manager-specific logic in here, and this may work for you for most 
things, but I would be surprised if it worked for everything.
   
   Did you test this with both spark-submit and the interactive shells 
(spark-shell, pyspark, etc.)?  I'm not sure whether your cluster manager 
supports full cluster mode or only running the driver locally.
   
   I think if we want to officially support this we need something else; some 
parts would need to be pluggable.  That is going to be a whole lot more change, 
though.
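   To sketch what "pluggable" might look like: a minimal, hypothetical trait an 
external cluster manager could implement so spark-submit could ask it which 
deploy modes and options it accepts, instead of hard-coding that logic. All 
names here (`ClusterManagerPlugin`, `supportsDeployMode`, `validateArg`, 
`ClientOnlyManager`) are invented for illustration; this is not an existing 
Spark API.

   ```scala
   // Hypothetical sketch only -- not an existing Spark interface.
   // A pluggable hook that an external cluster manager could implement
   // so spark-submit can validate arguments and deploy modes without
   // hard-coding manager-specific logic.
   trait ClusterManagerPlugin {
     /** Master URL prefixes this manager claims, e.g. "toy://". */
     def masterPrefixes: Seq[String]

     /** Whether the manager can run the driver in the given deploy mode. */
     def supportsDeployMode(mode: String): Boolean

     /** Validate one submit option; None means "option not recognized". */
     def validateArg(name: String, value: String): Option[Boolean]
   }

   // Toy implementation for a manager that only supports client mode.
   class ClientOnlyManager extends ClusterManagerPlugin {
     def masterPrefixes: Seq[String] = Seq("toy://")
     def supportsDeployMode(mode: String): Boolean = mode == "client"
     def validateArg(name: String, value: String): Option[Boolean] =
       if (name == "--queue") Some(value.nonEmpty) else None
   }

   object Demo extends App {
     val cm = new ClientOnlyManager
     assert(cm.supportsDeployMode("client"))
     assert(!cm.supportsDeployMode("cluster"))
     assert(cm.validateArg("--queue", "prod").contains(true))
     assert(cm.validateArg("--unknown-flag", "x").isEmpty)
   }
   ```

   Something along these lines would let spark-submit reject unsupported 
modes up front, but as noted above it touches a lot of existing logic.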


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


