Github user NiharS commented on the issue:

    https://github.com/apache/spark/pull/22192
  
    You're right, I ran in local-cluster mode and it exited very quickly, citing 
executors shutting down after not being able to find my test plugin. The logs do 
say, though, that it uses a CoarseGrainedExecutorBackend:
    
    `18/09/05 12:03:20 INFO CoarseGrainedExecutorBackend: Connecting to driver: 
spark://CoarseGrainedScheduler@nihar-xxx:45767`
    
    unless you mean that only YARN uses that command line option. I'm looking 
into how it behaves in regular standalone mode with different thread counts. 
Thanks for pointing this out!
    
    I'm looking up other spark-submit options that could be used to distribute 
the jar to other nodes, but I'm not super hopeful it will work out. If it indeed 
doesn't, I'll start exploring other options once I figure out why the PySpark 
tests are failing.
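    For reference, one option worth checking here is spark-submit's `--jars` 
flag, which copies the listed jars to the executors and puts them on their 
classpaths. A minimal sketch (the paths, class name, and master URL below are 
placeholders, not taken from this PR):
    
    ```shell
    # Hedged sketch: ship a plugin jar to executors via --jars.
    # test-plugin.jar, app.jar, com.example.MyApp, and the master URL
    # are hypothetical placeholders.
    spark-submit \
      --master spark://master-host:7077 \
      --class com.example.MyApp \
      --jars /path/to/test-plugin.jar \
      /path/to/app.jar
    ```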


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
