Thank you all very much for your patience and help. I changed the parameter spark.yarn.jar from

spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

to

spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar
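[For anyone hitting the same error: the fix amounts to putting the Spark *assembly* jar, not the examples jar, on HDFS and pointing spark.yarn.jar at it. A sketch of the setup, using the paths from this thread and assuming the assembly jar sits in lib/ of the spark-1.6.1-bin-hadoop2.6 binary distribution; run against your own cluster paths.]

```shell
# Upload the Spark assembly jar (it contains
# org.apache.spark.deploy.yarn.ExecutorLauncher) to HDFS, so YARN can
# cache it on the nodes instead of re-uploading it on every submit.
hdfs dfs -mkdir -p /user/shihj/spark_lib
hdfs dfs -put /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar \
    hdfs://master:9000/user/shihj/spark_lib/

# Then in conf/spark-defaults.conf, point spark.yarn.jar at the assembly
# jar -- NOT at the examples jar, which has no YARN launcher classes:
#
#   spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar
```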
and then it ran fine. Thanks again, everyone!

------------------ Original Message ------------------
From: <958943...@qq.com>
Date: Wednesday, June 22, 2016, 3:10 PM
To: "Yash Sharma" <yash...@gmail.com>
Cc: "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Yes, it runs well:

shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master local[4] \
> lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
16/06/22 15:08:14 INFO SparkContext: Running Spark version 1.6.1
16/06/22 15:08:14 WARN SparkConf: SPARK_WORKER_INSTANCES was detected (set to '1'). This is deprecated in Spark 1.0+. Please instead use: - ./spark-submit with --num-executors to specify the number of executors - Or set SPARK_EXECUTOR_INSTANCES - spark.executor.instances to configure the number of instances in the spark config.
16/06/22 15:08:15 INFO SecurityManager: Changing view acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: Changing modify acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(shihj); users with modify permissions: Set(shihj)
16/06/22 15:08:16 INFO Utils: Successfully started service 'sparkDriver' on port 43865.
16/06/22 15:08:16 INFO Slf4jLogger: Slf4jLogger started
16/06/22 15:08:16 INFO Remoting: Starting remoting
16/06/22 15:08:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.20.137:39308]
16/06/22 15:08:17 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39308.
16/06/22 15:08:17 INFO SparkEnv: Registering MapOutputTracker
16/06/22 15:08:17 INFO SparkEnv: Registering BlockManagerMaster
16/06/22 15:08:17 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3195b7f2-126d-4734-a681-6ec00727352a
16/06/22 15:08:17 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/06/22 15:08:17 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/22 15:08:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/22 15:08:18 INFO SparkUI: Started SparkUI at http://192.168.20.137:4040
16/06/22 15:08:18 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
16/06/22 15:08:18 INFO HttpServer: Starting HTTP Server
16/06/22 15:08:18 INFO Utils: Successfully started service 'HTTP file server' on port 49924.
16/06/22 15:08:22 INFO SparkContext: Added JAR file:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:22 INFO Executor: Starting executor ID driver on host localhost
16/06/22 15:08:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33520.
16/06/22 15:08:22 INFO NettyBlockTransferService: Server created on 33520
16/06/22 15:08:22 INFO BlockManagerMaster: Trying to register BlockManager
16/06/22 15:08:22 INFO BlockManagerMasterEndpoint: Registering block manager localhost:33520 with 511.1 MB RAM, BlockManagerId(driver, localhost, 33520)
16/06/22 15:08:22 INFO BlockManagerMaster: Registered BlockManager
16/06/22 15:08:23 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/06/22 15:08:23 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/06/22 15:08:23 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
16/06/22 15:08:23 INFO DAGScheduler: Parents of final stage: List()
16/06/22 15:08:23 INFO DAGScheduler: Missing parents: List()
16/06/22 15:08:23 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/06/22 15:08:23 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
16/06/22 15:08:23 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
16/06/22 15:08:24 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1216.0 B, free 3.0 KB)
16/06/22 15:08:24 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:33520 (size: 1216.0 B, free: 511.1 MB)
16/06/22 15:08:24 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/06/22 15:08:24 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/06/22 15:08:24 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/06/22 15:08:24 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, partition 3,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
16/06/22 15:08:24 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/06/22 15:08:24 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
16/06/22 15:08:24 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/22 15:08:24 INFO Executor: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:24 INFO Utils: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/fetchFileTemp6247932809883110092.tmp
16/06/22 15:08:29 INFO Executor: Adding file:/tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
16/06/22 15:08:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, partition 4,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, partition 5,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
16/06/22 15:08:29 INFO Executor: Running task 5.0 in stage 0.0 (TID 5)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, partition 6,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 6.0 in stage 0.0 (TID 6)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 5690 ms on localhost (1/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 5652 ms on localhost (2/10)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, partition 7,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 7.0 in stage 0.0 (TID 7)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 5688 ms on localhost (3/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 5695 ms on localhost (4/10)
16/06/22 15:08:29 INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, partition 8,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 165 ms on localhost (5/10)
16/06/22 15:08:29 INFO Executor: Running task 8.0 in stage 0.0 (TID 8)
16/06/22 15:08:29 INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 7.0 in stage 0.0 (TID 7). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, partition 9,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 9.0 in stage 0.0 (TID 9)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 184 ms on localhost (6/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 99 ms on localhost (7/10)
16/06/22 15:08:29 INFO Executor: Finished task 6.0 in stage 0.0 (TID 6). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 190 ms on localhost (8/10)
16/06/22 15:08:30 INFO Executor: Finished task 9.0 in stage 0.0 (TID 9). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 155 ms on localhost (9/10)
16/06/22 15:08:30 INFO Executor: Finished task 8.0 in stage 0.0 (TID 8). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 217 ms on localhost (10/10)
16/06/22 15:08:30 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 6.038 s
16/06/22 15:08:30 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/06/22 15:08:30 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 6.624021 s
Pi is roughly 3.142184
16/06/22 15:08:30 INFO SparkUI: Stopped Spark web UI at http://192.168.20.137:4040
16/06/22 15:08:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/22 15:08:30 INFO MemoryStore: MemoryStore cleared
16/06/22 15:08:30 INFO BlockManager: BlockManager stopped
16/06/22 15:08:30 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/22 15:08:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/22 15:08:31 INFO SparkContext: Successfully stopped SparkContext
16/06/22 15:08:31 INFO ShutdownHookManager: Shutdown hook called
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 3:06 PM
To: <958943...@qq.com>
Cc: "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

I cannot get a lot of info from these logs, but it surely seems like a yarn setup issue. Did you try local mode to check if it works -

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  spark-examples-1.6.1-hadoop2.6.0.jar 10

Note - the jar is a local one.

On Wed, Jun 22, 2016 at 4:50 PM, <958943...@qq.com> wrote:

Application application_1466568126079_0006 failed 2 times due to AM Container for appattempt_1466568126079_0006_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://master:8088/proxy/application_1466568126079_0006/ Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466568126079_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.

But the command gets an error:

shihj@master:~/workspace/hadoop-2.6.4$ yarn logs -applicationId application_1466568126079_0006
Usage: yarn [options]
yarn: error: no such option: -a

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 2:46 PM
To: <958943...@qq.com>
Cc: "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Are you able to run anything else on the cluster? I suspect it's yarn that is not able to run the class. If you could just share the logs in pastebin we could confirm that.

On Wed, Jun 22, 2016 at 4:43 PM, <958943...@qq.com> wrote:

I want to avoid uploading the resource files (especially the jar packages), because they are very big and the application waits too long. Is there a good way to do this? That is why I configured that parameter, but it did not have the effect I wanted.

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 2:34 PM
To: <958943...@qq.com>
Cc: "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Try with : --master yarn-cluster

On Wed, Jun 22, 2016 at 4:30 PM, <958943...@qq.com> wrote:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Skip remote jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 2:28 PM
To: <958943...@qq.com>
Cc: "Saisai Shao" <sai.sai.s...@gmail.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Or better, try the master as yarn-cluster,

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-cluster \
  --driver-memory 512m \
  --num-executors 2 \
  --executor-memory 512m \
  --executor-cores 2 \
  hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

On Wed, Jun 22, 2016 at 4:27 PM, <958943...@qq.com> wrote:

> Is it able to run on local mode ?

What does that mean? Standalone mode?

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 2:18 PM
To: "Saisai Shao" <sai.sai.s...@gmail.com>
Cc: <958943...@qq.com>; "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Try providing the jar with the hdfs prefix. It's probably just because it's not able to find the jar on all nodes.

hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

Is it able to run on local mode ?

On Wed, Jun 22, 2016 at 4:14 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:

spark.yarn.jar (default: none) - The location of the Spark jar file, in case overriding the default location is desired. By default, Spark on YARN will use a Spark jar installed locally, but the Spark jar can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. To point to a jar on HDFS, for example, set this configuration to hdfs:///some/path.

spark.yarn.jar is used for the Spark run-time system jar, which is the Spark assembly jar, not the application jar (the examples assembly jar). So in your case you uploaded the examples assembly jar to HDFS, in which the Spark system classes are not packed, so ExecutorLauncher cannot be found.
Thanks,
Saisai

On Wed, Jun 22, 2016 at 2:10 PM, <958943...@qq.com> wrote:

shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Local jar /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar does not exist, skipping.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

It errors out immediately.

------------------ Original Message ------------------
From: "Yash Sharma" <yash...@gmail.com>
Date: Wednesday, June 22, 2016, 2:04 PM
To: <958943...@qq.com>
Cc: "user" <user@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

How about supplying the jar directly in spark submit -

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn-client \
  --driver-memory 512m \
  --num-executors 2 \
  --executor-memory 512m \
  --executor-cores 2 \
  /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

On Wed, Jun 22, 2016 at 3:59 PM, <958943...@qq.com> wrote:

I configured this parameter in spark-defaults.conf:

spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

Then running

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 10

gives:

Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

But when I don't configure that parameter, there is no error. Why? Is that parameter only meant to avoid uploading the resource file (the jar package)?
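[The distinction drawn later in the thread - the assembly jar is the Spark run-time system jar, while the examples jar is only an application jar - can be checked directly by listing the jar contents. A sketch, assuming the stock spark-1.6.1-bin-hadoop2.6 layout with both jars under lib/:]

```shell
cd /usr/local/spark/spark-1.6.1-bin-hadoop2.6

# The assembly jar should list org/apache/spark/deploy/yarn/ExecutorLauncher*
# among its entries; this is the class YARN launches for the AM.
unzip -l lib/spark-assembly-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher

# The examples jar should produce no match, which is why pointing
# spark.yarn.jar at it leaves YARN with no launcher class to load.
unzip -l lib/spark-examples-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher
```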