I've been trying to set up the latest stable Spark 3.0 release on a Hadoop cluster using YARN.  When running spark-submit in client mode, I always get an error that org.apache.spark.deploy.yarn.ExecutorLauncher cannot be found.  This happens when I preload the Spark jar files onto HDFS and point the spark.yarn.jars property at the HDFS location (i.e. set spark.yarn.jars to hdfs:///spark-3/jars or hdfs://namenode:8020/spark-3/jars).  I've checked the /spark-3/jars directory on HDFS and all the jar files are accessible.  The exception messages are listed below.
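For reference, this is roughly how I staged the jars and set the property (commands reconstructed from memory, so treat the exact paths as illustrative):

    # upload the jars from the local Spark install to HDFS
    hdfs dfs -mkdir -p /spark-3/jars
    hdfs dfs -put $SPARK_HOME/jars/*.jar /spark-3/jars/

    # the relevant line in conf/spark-defaults.conf
    spark.yarn.jars    hdfs://namenode:8020/spark-3/jars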

This problem does not occur when I comment out the spark.yarn.jars line in the spark-defaults.conf file; spark-submit then finishes without any problems.
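In case it helps, the submission looks roughly like this (SparkPi is just a stand-in here for the actual application):

    spark-submit \
      --master yarn \
      --deploy-mode client \
      --class org.apache.spark.examples.SparkPi \
      $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.0.jar 100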

Any ideas what I have done wrong?  Thanks!

-- ND

======================================================================

Exception in thread "main" org.apache.spark.SparkException: Application application_1594664166056_0005 failed 2 times due to AM Container for appattempt_1594664166056_0005_000002 exited with exitCode: 1 Failing this attempt. Diagnostics: [2020-07-13 20:07:20.882]Exception from container-launch.
Container id: container_1594664166056_0005_02_000001
Exit code: 1

[2020-07-13 20:07:20.886]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

