[ 
https://issues.apache.org/jira/browse/SPARK-2290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14054633#comment-14054633
 ] 

Patrick Wendell commented on SPARK-2290:
----------------------------------------

I updated the description here. It is indeed pretty strange that we ship this 
to the cluster when it's always available there anyway (since the Worker has 
its own sparkHome). So we should just remove it.

> Do not send SPARK_HOME from workers to executors
> ------------------------------------------------
>
>                 Key: SPARK-2290
>                 URL: https://issues.apache.org/jira/browse/SPARK-2290
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: YanTang Zhai
>            Assignee: Patrick Wendell
>
> The client path is /data/home/spark/test/spark-1.0.0, while the worker deploy 
> path is /data/home/spark/spark-1.0.0, which differs from the client path. 
> An application is then launched with ./bin/spark-submit --class 
> JobTaskJoin --master spark://172.25.38.244:7077 --executor-memory 128M 
> ../jobtaskjoin_2.10-1.0.0.jar. However, the application fails because an 
> exception is thrown:
> java.io.IOException: Cannot run program 
> "/data/home/spark/test/spark-1.0.0-bin-0.20.2-cdh3u3/bin/compute-classpath.sh"
>  (in directory "."): error=2, No such file or directory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
>         at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:759)
>         at 
> org.apache.spark.deploy.worker.CommandUtils$.buildJavaOpts(CommandUtils.scala:72)
>         at 
> org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:37)
>         at 
> org.apache.spark.deploy.worker.ExecutorRunner.getCommandSeq(ExecutorRunner.scala:109)
>         at 
> org.apache.spark.deploy.worker.ExecutorRunner.fetchAndRunExecutor(ExecutorRunner.scala:124)
>         at 
> org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:58)
> Caused by: java.io.IOException: error=2, No such file or directory
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1021)
>         ... 6 more
> Therefore, I think the worker should not use appDesc.sparkHome when handling 
> LaunchExecutor; instead, the worker could use its own sparkHome directly.
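
The proposed change can be sketched as follows. This is a minimal, self-contained illustration (the class and method names here are hypothetical, not the actual Spark internals): the worker stops preferring a sparkHome path sent by the client, which may not exist on the worker's machine, and always uses its own deploy path.

```scala
// Hypothetical sketch of the proposed behavior; names are illustrative,
// not the real org.apache.spark.deploy.worker API.
case class AppDescription(sparkHome: Option[String])

case class Worker(sparkHome: String) {
  // Before the fix: the worker prefers the client's path, which may not
  // exist locally (causing the "No such file or directory" error above).
  def executorSparkHomeBefore(app: AppDescription): String =
    app.sparkHome.getOrElse(sparkHome)

  // After the fix: always use the worker's own deploy path.
  def executorSparkHomeAfter(app: AppDescription): String =
    sparkHome
}

object Demo extends App {
  val worker = Worker("/data/home/spark/spark-1.0.0")
  val app    = AppDescription(Some("/data/home/spark/test/spark-1.0.0"))

  println(worker.executorSparkHomeBefore(app)) // client's path: may be missing on the worker
  println(worker.executorSparkHomeAfter(app))  // worker's own path: always valid locally
}
```

With this change, the path the client was built from becomes irrelevant to executor launch, which is what the scenario in the description requires.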



--
This message was sent by Atlassian JIRA
(v6.2#6252)