Hi all,

I have a cluster with HDP 2.0. I built Spark 1.0 on the edge node and am
trying to run it with the command
./bin/spark-submit --class test.etl.RunETL --master yarn-cluster
--num-executors 14 --driver-memory 3200m --executor-memory 3g
--executor-cores 2 my-etl-1.0-SNAPSHOT-hadoop2.2.0.jar

As a result, the YARN application fails with the following stack trace:

Application application_1404481778533_0068 failed 3 times due to AM
Container for appattempt_1404481778533_0068_000003 exited with exitCode: 1
due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
at org.apache.hadoop.util.Shell.run(Shell.java:379)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
 .Failing this attempt.. Failing the application

Log Type: stderr

Log Length: 686

Unknown/unsupported param List(--executor-memory, 3072,
--executor-cores, 2, --num-executors, 14)
Usage: org.apache.spark.deploy.yarn.ApplicationMaster [options]
Options:
  --jar JAR_PATH       Path to your application's JAR file (required)
  --class CLASS_NAME   Name of your application's main class (required)
  --args ARGS          Arguments to be passed to your application's main class.
                       Mutliple invocations are possible, each will be
                       passed in order.
  --num-workers NUM    Number of workers to start (Default: 2)
  --worker-cores NUM   Number of cores for the workers (Default: 1)
  --worker-memory MEM  Memory per Worker (e.g. 1000M, 2G) (Default: 1G)


This looks like the old (pre-1.0) Spark option names (--num-workers,
--worker-memory), so the ApplicationMaster being launched doesn't seem to
understand the spark-submit 1.0 flags... any ideas?
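For reference, the usage text above matches the 0.9-era YARN client, which was
launched directly via spark-class rather than spark-submit. A sketch of what
that old-style invocation would look like with my settings is below (the
SPARK_JAR path is a placeholder, not from my setup, and I'm not certain this
exact form matches whatever assembly the cluster is actually picking up):

```shell
# Pre-1.0 Spark YARN submission, using the flags the AM's usage message lists.
# SPARK_JAR must point at the Spark assembly jar; the path here is a placeholder.
SPARK_JAR=/path/to/spark-assembly-hadoop2.2.0.jar \
  ./bin/spark-class org.apache.spark.deploy.yarn.Client \
    --jar my-etl-1.0-SNAPSHOT-hadoop2.2.0.jar \
    --class test.etl.RunETL \
    --num-workers 14 \
    --worker-memory 3g \
    --worker-cores 2
```

If the AM only accepts these names, I suspect an old Spark assembly is being
shipped to (or found on) the cluster instead of my 1.0 build.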

Thank you,
Konstantin Kudryavtsev
