Github user andrewor14 commented on the pull request:

    https://github.com/apache/spark/pull/1472#issuecomment-50700741
  
    UPDATE: I had a conversation with @pwendell about this. We came to the 
conclusion that there is really no benefit to having a mechanism for specifying 
an executor home, at least in standalone mode. Even if we have multiple 
installations of Spark on the worker machines, we can pick which one to connect 
to simply by specifying a different Master. In either case, we should just use 
the Worker's current working directory as the executor's (or the driver's, in 
the case of standalone-cluster mode) Spark home.
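    
    For illustration, choosing between installations in standalone mode comes 
down to pointing the application at a different master URL (host names and the 
application class below are hypothetical):
    
    ```shell
    # Each standalone cluster is identified by its master's spark:// URL,
    # so selecting the master selects the Spark installation to run against.
    ./bin/spark-submit \
      --master spark://cluster-a-master:7077 \
      --class org.example.MyApp \
      myapp.jar
    ```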
    
    I will make the relevant changes shortly. If I don't get to it by the 1.1 
code freeze, we should just merge in #1392 instead.


