I’m trying to upgrade a Spark project, written in Scala, from Spark 1.2.1 to 
1.3.0, so I changed my `build.sbt` like so:

   -libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.1" % "provided"
   +libraryDependencies += "org.apache.spark" %% "spark-core" % "1.3.0" % "provided"

Then I submit the resulting jar:

   HADOOP_CONF_DIR=/etc/hadoop/conf \
        spark-submit \
        --driver-class-path=/etc/hbase/conf \
        --conf spark.hadoop.validateOutputSpecs=false \
        --conf spark.yarn.jar=hdfs:/apps/local/spark-assembly-1.3.0-hadoop2.4.0.jar \
        --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
        --deploy-mode=cluster \
        --master=yarn \
        --class=TestObject \
        --num-executors=54 \
        target/scala-2.11/myapp-assembly-1.2.jar
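
Two sanity checks I can run on the artifacts referenced above (paths exactly as in the command):

    # the Spark assembly that spark.yarn.jar points at should exist on HDFS
    hdfs dfs -ls hdfs:/apps/local/spark-assembly-1.3.0-hadoop2.4.0.jar

    # spark-core is "provided", so no Spark classes should be inside my app jar
    jar tf target/scala-2.11/myapp-assembly-1.2.jar | grep 'org/apache/spark'

(I’d expect the second command to print nothing.)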

The job fails with the following exception in the terminal:

    15/03/19 10:30:07 INFO yarn.Client:
        client token: N/A
        diagnostics: Application application_1420225286501_4699 failed 2 times due to AM Container for appattempt_1420225286501_4699_000002 exited with exitCode: 127 due to: Exception from container-launch:
    org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
        at org.apache.hadoop.util.Shell.run(Shell.java:379)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

Finally, I check the application’s page in the YARN web UI (the job shows up there, so I know it at least made it that far), and the only logs it shows are these:

        Log Type: stderr
        Log Length: 61
        /bin/bash: {{JAVA_HOME}}/bin/java: No such file or directory

        Log Type: stdout
        Log Length: 0

I’m not sure how to interpret that. Is `{{JAVA_HOME}}` a literal, braces and all, that’s somehow making it into a launch script? (Exit code 127 is bash’s “command not found”, which would fit the java path being bogus.) Is this coming from the worker nodes or the driver? Anything I can do to experiment and troubleshoot?
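
Two experiments I have in mind, in case they’re sensible. First, look for the literal placeholder in the generated launch script on a node that ran a failed container (the local-dirs path below is a guess; it depends on yarn.nodemanager.local-dirs):

    # run on the NodeManager host that launched the failed container
    grep -r '{{JAVA_HOME}}' /hadoop/yarn/local/usercache/*/appcache/ 2>/dev/null

Second, pin JAVA_HOME explicitly at submit time and see whether the error changes, via the spark.yarn.appMasterEnv.* and spark.executorEnv.* settings from the running-on-YARN docs (the JDK path here is just an example):

    spark-submit ... \
        --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/java/jdk1.7.0_67 \
        --conf spark.executorEnv.JAVA_HOME=/usr/java/jdk1.7.0_67 \
        ...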

  -Ken


