[ 
https://issues.apache.org/jira/browse/SPARK-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14069479#comment-14069479
 ] 

Nan Zhu commented on SPARK-2454:
--------------------------------

There is a related issue and fix:
https://issues.apache.org/jira/browse/SPARK-2404

https://github.com/apache/spark/pull/1331

where spark-submit and spark-class should not overwrite SPARK_HOME if the 
user has already set it.

In our scenario, the remote cluster shares the same user as the login portal, 
say codingcat, 

so SPARK_HOME is set to /home/codingcat/spark-1.0, and other users on the login 
portal have a soft link to SPARK_HOME in their own directories. 

However, the current scripts overwrite the already-set SPARK_HOME with the 
working directory from which spark-submit is run, which does not exist on the 
cluster, causing exceptions like "cannot run /home/local_user/spark-class".
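The fix boils down to a guard in the launch scripts: only derive SPARK_HOME 
from the script location when the user has not already exported it. A minimal 
sketch of that guard (the fallback path here is illustrative, not the actual 
script content):

```shell
# Preserve a user-provided SPARK_HOME instead of overwriting it.
# Only fall back to a derived location when the variable is unset or empty.
if [ -z "${SPARK_HOME}" ]; then
  # Hypothetical fallback: directory the user described in this scenario.
  export SPARK_HOME="/home/codingcat/spark-1.0"
fi
echo "$SPARK_HOME"
```

With this pattern, a portal user who exports SPARK_HOME before calling 
spark-submit keeps their own path, and the derived path is used only as a 
default.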


 

> Separate driver spark home from executor spark home
> ---------------------------------------------------
>
>                 Key: SPARK-2454
>                 URL: https://issues.apache.org/jira/browse/SPARK-2454
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.1
>            Reporter: Andrew Or
>             Fix For: 1.1.0
>
>
> The driver may not always share the same directory structure as the 
> executors. It makes little sense to always re-use the driver's spark home on 
> the executors.
> https://github.com/apache/spark/pull/1244/ is an open effort to fix this. 
> However, this still requires us to set SPARK_HOME on all the executor nodes. 
> Really we should separate this out into something like `spark.executor.home` 
> and `spark.driver.home` rather than re-using SPARK_HOME everywhere.
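If the separation proposed above were adopted, a spark-defaults.conf might 
look like the following sketch. Note that `spark.driver.home` and 
`spark.executor.home` are only the names proposed in this issue, not 
existing Spark configuration keys, and both paths are hypothetical:

```
# Hypothetical config sketch based on the proposal in SPARK-2454.
spark.driver.home      /home/codingcat/spark-1.0
spark.executor.home    /opt/spark-1.0
```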



--
This message was sent by Atlassian JIRA
(v6.2#6252)
