YanTang Zhai created SPARK-2290:
-----------------------------------
Summary: Worker should directly use its own sparkHome instead of
appDesc.sparkHome when LaunchExecutor
Key: SPARK-2290
URL: https://issues.apache.org/jira/browse/SPARK-2290
Project: Spark
Issue Type: Bug
Components: Spark Core
Reporter: YanTang Zhai
Priority: Minor
The client's Spark home is /data/home/spark/test/spark-1.0.0, while the worker is deployed under
/data/home/spark/spark-1.0.0, a different path. An application is then launched with
./bin/spark-submit --class JobTaskJoin --master spark://172.25.38.244:7077 --executor-memory 128M
../jobtaskjoin_2.10-1.0.0.jar. However, the application fails because the following exception is
thrown (a simplified sketch of the failing step is given after the stack trace):
java.io.IOException: Cannot run program "/data/home/spark/test/spark-1.0.0-bin-0.20.2-cdh3u3/bin/compute-classpath.sh" (in directory "."): error=2, No such file or directory
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:759)
    at org.apache.spark.deploy.worker.CommandUtils$.buildJavaOpts(CommandUtils.scala:72)
    at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:37)
    at org.apache.spark.deploy.worker.ExecutorRunner.getCommandSeq(ExecutorRunner.scala:109)
    at org.apache.spark.deploy.worker.ExecutorRunner.fetchAndRunExecutor(ExecutorRunner.scala:124)
    at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:58)
Caused by: java.io.IOException: error=2, No such file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
    at java.lang.ProcessImpl.start(ProcessImpl.java:130)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1021)
    ... 6 more
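For reference, a minimal sketch of the failing step, with a hypothetical name (computeClasspathSketch is not the actual CommandUtils.buildJavaOpts code): the script path is derived from the application's sparkHome, i.e. the client's install directory, which does not exist on the worker, so starting the process fails with the IOException above.

{code:scala}
import java.io.File

// Hypothetical simplification of the worker-side step that fails;
// not the actual CommandUtils.buildJavaOpts implementation.
def computeClasspathSketch(appSparkHome: String): Unit = {
  // appSparkHome comes from appDesc.sparkHome, i.e. the client's install path.
  val script = new File(appSparkHome, "bin/compute-classpath.sh")
  // On a worker deployed under a different path the script does not exist,
  // so ProcessBuilder.start throws
  // java.io.IOException: Cannot run program ... error=2, No such file or directory
  new ProcessBuilder(script.getAbsolutePath).start()
}
{code}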
Therefore, I think the worker should not use appDesc.sparkHome when handling LaunchExecutor;
instead, the worker should use its own sparkHome directly.
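A minimal sketch of the proposed behavior, assuming simplified, illustrative types (AppDescription and Worker below are stand-ins, not the real Spark classes): when constructing the ExecutorRunner for LaunchExecutor, the worker would resolve sparkHome from its own installation rather than from the application description.

{code:scala}
import java.io.File

// Illustrative stand-ins; not the real Spark classes.
case class AppDescription(name: String, sparkHome: Option[String])

class Worker(val sparkHome: File) {
  // Current behavior (problematic): prefer the sparkHome sent by the client.
  def executorSparkHomeCurrent(appDesc: AppDescription): File =
    appDesc.sparkHome.map(new File(_)).getOrElse(sparkHome)

  // Proposed behavior: always use the worker's own installation, so
  // bin/compute-classpath.sh resolves to a path that exists locally.
  def executorSparkHomeProposed(appDesc: AppDescription): File =
    sparkHome
}

object Demo extends App {
  val worker  = new Worker(new File("/data/home/spark/spark-1.0.0"))
  val appDesc = AppDescription("JobTaskJoin", Some("/data/home/spark/test/spark-1.0.0"))
  println(worker.executorSparkHomeCurrent(appDesc))  // client path, missing on the worker
  println(worker.executorSparkHomeProposed(appDesc)) // worker path, exists locally
}
{code}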