GitHub user shivaram commented on the pull request:

    https://github.com/apache/spark/pull/7139#issuecomment-126750717
  
    Ah I see - so this affects things like YARN cluster mode, where the
    spark-submit script and the driver run on different machines? How does
    this work out for Python packages? I'm just asking these questions
    because changing SparkContext initialization is pretty intrusive, and
    I'm checking whether we can avoid it.
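
    For concreteness, a minimal sketch of the submission path in question
    (the file names and package coordinate below are hypothetical): in
    yarn-cluster mode the driver is launched on a cluster node, so anything
    named at submit time has to be shipped with the application rather than
    read off the submitting machine.

        # Hypothetical invocation; these are standard spark-submit flags.
        # In client mode the driver runs on the submitting machine, so
        # local paths are directly readable; in cluster mode the driver
        # runs on a YARN node and the listed dependencies must be
        # distributed to it.
        spark-submit \
          --master yarn-cluster \
          --packages com.example:mylib:0.1 \
          --py-files deps.zip \
          my_app.py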

