Hi folks,

I am trying to submit a Spark app via the YARN REST API, following this
tutorial from Hortonworks:
https://community.hortonworks.com/articles/28070/starting-spark-jobs-directly-via-yarn-rest-api.html

The general flow is: GET a new application ID from the ResourceManager,
then POST the new app as a JSON object that carries the ID and several
parameters.
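
For reference, the two REST calls look roughly like this (rm-host is a
placeholder for my ResourceManager, and spark-yarn.json is just an
illustrative name for the file holding the submission body):

  # Step 1: ask the ResourceManager for a new application ID
  curl -X POST http://<rm-host>:8088/ws/v1/cluster/apps/new-application

  # Step 2: submit the app; the JSON body carries the ID and parameters
  curl -X POST -H "Content-Type: application/json" \
       -d @spark-yarn.json \
       http://<rm-host>:8088/ws/v1/cluster/apps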

However, the tutorial targets Spark 1.6, and the JSON parameter format
has changed with Spark 2. Specifically, under Spark 1.6 the Spark
assembly JAR, the app JAR, and the spark-yarn properties file are
provided as "local-resources" and cached files, as the tutorial
describes. Under Spark 2, there are no local resources or cached files;
the Spark assembly JAR and properties are instead provided as
"resources". See the two logs attached below for details.
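
For context, the Spark 1.6 submission body from the tutorial declares
each file as a local-resource entry roughly like the following (the
key, path, size, and timestamp are taken from the Spark 1.6 log below;
the surrounding JSON is abbreviated):

  "local-resources": {
    "entry": [
      {
        "key": "__spark__.jar",
        "value": {
          "resource": "hdfs://mhx1.fyre.ibm.com:8020/iop/apps/4.2.1.0/spark/jars/spark-assembly.jar",
          "type": "FILE",
          "visibility": "PUBLIC",
          "size": 198422475,
          "timestamp": 1493752890532
        }
      }
    ]
  }

It is this kind of entry that I do not know how to express for Spark 2.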

As a result, the Spark assembly JAR is not visible to the YARN
container. So, although I am able to submit an app, it always finishes
as FAILED. Specifically, the container log (see attachment) reports
"Error: Could not find or load main class
org.apache.spark.executor.CoarseGrainedExecutorBackend", where
CoarseGrainedExecutorBackend is the main class in the executor launch
command.

So, what is the correct JSON parameter format for submitting a Spark 2
app via the YARN REST API?

Thanks,

Kun

Log Type: AppMaster.stderr
Log Upload Time: Wed May 03 16:45:54 -0700 2017
Log Length: 24362
17/05/03 16:45:42 INFO SignalUtils: Registered signal handler for TERM
17/05/03 16:45:42 INFO SignalUtils: Registered signal handler for HUP
17/05/03 16:45:42 INFO SignalUtils: Registered signal handler for INT
17/05/03 16:45:43 INFO ApplicationMaster: Preparing Local resources
17/05/03 16:45:43 INFO ApplicationMaster: ApplicationAttemptId: 
appattempt_1493660524300_0103_000002
17/05/03 16:45:43 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/05/03 16:45:43 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/05/03 16:45:43 INFO SecurityManager: Changing view acls groups to: 
17/05/03 16:45:43 INFO SecurityManager: Changing modify acls groups to: 
17/05/03 16:45:43 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(yarn, hdfs); 
groups with view permissions: Set(); users  with modify permissions: Set(yarn, 
hdfs); groups with modify permissions: Set()
17/05/03 16:45:43 INFO ApplicationMaster: Starting the user application in a 
separate Thread
17/05/03 16:45:43 INFO ApplicationMaster: Waiting for spark context 
initialization...
17/05/03 16:45:43 INFO SparkContext: Running Spark version 2.1.0
17/05/03 16:45:43 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/05/03 16:45:43 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/05/03 16:45:43 INFO SecurityManager: Changing view acls groups to: 
17/05/03 16:45:43 INFO SecurityManager: Changing modify acls groups to: 
17/05/03 16:45:43 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(yarn, hdfs); 
groups with view permissions: Set(); users  with modify permissions: Set(yarn, 
hdfs); groups with modify permissions: Set()
17/05/03 16:45:44 INFO Utils: Successfully started service 'sparkDriver' on 
port 41447.
17/05/03 16:45:44 INFO SparkEnv: Registering MapOutputTracker
17/05/03 16:45:44 INFO SparkEnv: Registering BlockManagerMaster
17/05/03 16:45:44 INFO BlockManagerMasterEndpoint: Using 
org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/05/03 16:45:44 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/05/03 16:45:44 INFO DiskBlockManager: Created local directory at 
/hadoop/yarn/local/usercache/hdfs/appcache/application_1493660524300_0103/blockmgr-5231c30a-c7aa-475b-8b45-5a39d2f2bf8a
17/05/03 16:45:44 INFO MemoryStore: MemoryStore started with capacity 912.3 MB
17/05/03 16:45:44 INFO SparkEnv: Registering OutputCommitCoordinator
17/05/03 16:45:44 INFO log: Logging initialized @2511ms
17/05/03 16:45:44 INFO JettyUtils: Adding filter: 
org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
17/05/03 16:45:44 INFO Server: jetty-9.2.z-SNAPSHOT
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@3b40bd2c{/jobs,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@1307a150{/jobs/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6ad020cc{/jobs/job,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@129030b2{/jobs/job/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2c874611{/stages,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2247ff0{/stages/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6919e938{/stages/stage,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@7a9f4109{/stages/stage/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6cbe3785{/stages/pool,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4d440e1d{/stages/pool/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6b326ea4{/storage,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6b25edc2{/storage/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@763afd3c{/storage/rdd,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@1be32100{/storage/rdd/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@63dc3b96{/environment,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@27d391d2{/environment/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6a14b09{/executors,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4c881fc{/executors/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@6ca1a78{/executors/threadDump,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@3dac5672{/executors/threadDump/json,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@3159bbcd{/static,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@715e0b7b{/,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@2791c6f2{/api,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@78510bb4{/jobs/job/kill,null,AVAILABLE}
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@4aa50a0{/stages/stage/kill,null,AVAILABLE}
17/05/03 16:45:44 INFO ServerConnector: Started 
ServerConnector@59ae3d90{HTTP/1.1}{0.0.0.0:45477}
17/05/03 16:45:44 INFO Server: Started @2704ms
17/05/03 16:45:44 INFO Utils: Successfully started service 'SparkUI' on port 
45477.
17/05/03 16:45:44 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at 
http://172.16.191.74:45477
17/05/03 16:45:44 INFO YarnClusterScheduler: Created YarnClusterScheduler
17/05/03 16:45:44 INFO SchedulerExtensionServices: Starting Yarn extension 
services with app application_1493660524300_0103 and attemptId 
Some(appattempt_1493660524300_0103_000002)
17/05/03 16:45:44 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 36974.
17/05/03 16:45:44 INFO NettyBlockTransferService: Server created on 
172.16.191.74:36974
17/05/03 16:45:44 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
17/05/03 16:45:44 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, 172.16.191.74, 36974, None)
17/05/03 16:45:44 INFO BlockManagerMasterEndpoint: Registering block manager 
172.16.191.74:36974 with 912.3 MB RAM, BlockManagerId(driver, 172.16.191.74, 
36974, None)
17/05/03 16:45:44 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, 172.16.191.74, 36974, None)
17/05/03 16:45:44 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, 172.16.191.74, 36974, None)
17/05/03 16:45:44 INFO ContextHandler: Started 
o.s.j.s.ServletContextHandler@7c02f314{/metrics/json,null,AVAILABLE}
17/05/03 16:45:45 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: 
ApplicationMaster registered as 
NettyRpcEndpointRef(spark://YarnAM@172.16.191.74:41447)
17/05/03 16:45:45 INFO ApplicationMaster: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>/usr/iop/current/hadoop-client/*<CPS>/usr/iop/current/hadoop-client/lib/*<CPS>/usr/iop/current/hadoop-hdfs-client/*<CPS>/usr/iop/current/hadoop-hdfs-client/lib/*<CPS>/usr/iop/current/hadoop-yarn-client/*<CPS>/usr/iop/current/hadoop-yarn-client/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/iop/current/ext/hadoop/*<CPS>/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/iop/4.3.0.0/hadoop/lib/hadoop-lzo-0.6.0.4.3.0.0.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop-lzo/lib/*:/usr/iop/current/ext/hadoop/*
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 8616,225643850
    SPARK_USER -> hdfs
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PUBLIC,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1493670676756,1493660538918
    SPARK_YARN_CACHE_FILES -> 
hdfs://abc1.fyre.ibm.com:8020/sampleyarn/wordcount.jar#__app__.jar,hdfs://abc1.fyre.ibm.com:8020/iop/apps/4.3.0.0-0000/spark2/spark2-iop-yarn-archive.tar.gz#__spark__.jar

  command:
    {{JAVA_HOME}}/bin/java \ 
      -server \ 
      -Xmx1024m \ 
      -Djava.io.tmpdir={{PWD}}/tmp \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://CoarseGrainedScheduler@172.16.191.74:41447 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      1 \ 
      --app-id \ 
      application_1493660524300_0103 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr

  resources:

===============================================================================
17/05/03 16:45:45 INFO RMProxy: Connecting to ResourceManager at 
abc1.fyre.ibm.com/172.16.171.219:8030
17/05/03 16:45:45 INFO YarnRMClient: Registering the ApplicationMaster
17/05/03 16:45:45 INFO YarnAllocator: Will request 2 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/05/03 16:45:45 INFO YarnAllocator: Submitted 2 unlocalized container 
requests.
17/05/03 16:45:45 INFO AMRMClientImpl: Received new token for : 
abc3.fyre.ibm.com:45454
17/05/03 16:45:45 INFO YarnAllocator: Launching container 
container_1493660524300_0103_02_000002 on host abc3.fyre.ibm.com
17/05/03 16:45:45 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
17/05/03 16:45:45 INFO ApplicationMaster: Started progress reporter thread with 
(heartbeat : 3000, initial allocation : 200) intervals
17/05/03 16:45:45 INFO YarnAllocator: Will request 1 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/05/03 16:45:45 INFO YarnAllocator: Submitted 1 unlocalized container 
requests.
17/05/03 16:45:45 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/03 16:45:45 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc3.fyre.ibm.com:45454
17/05/03 16:45:45 INFO YarnAllocator: Canceling requests for 1 executor 
container(s) to have a new desired total 2 executors.
17/05/03 16:45:45 INFO YarnAllocator: Launching container 
container_1493660524300_0103_02_000003 on host abc3.fyre.ibm.com
17/05/03 16:45:45 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
17/05/03 16:45:45 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/03 16:45:45 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc3.fyre.ibm.com:45454
17/05/03 16:45:45 INFO YarnAllocator: Completed container 
container_1493660524300_0103_02_000002 on host: abc3.fyre.ibm.com (state: 
COMPLETE, exit status: 1)
17/05/03 16:45:45 WARN YarnAllocator: Container marked as failed: 
container_1493660524300_0103_02_000002 on host: abc3.fyre.ibm.com. Exit status: 
1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:45 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container 
marked as failed: container_1493660524300_0103_02_000002 on host: 
abc3.fyre.ibm.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:45 INFO BlockManagerMaster: Removal of executor 1 requested
17/05/03 16:45:45 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asked to remove 
non-existent executor 1
17/05/03 16:45:45 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 
from BlockManagerMaster.
17/05/03 16:45:48 INFO YarnAllocator: Will request 1 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/05/03 16:45:48 INFO YarnAllocator: Submitted 1 unlocalized container 
requests.
17/05/03 16:45:48 INFO YarnAllocator: Completed container 
container_1493660524300_0103_02_000003 on host: abc3.fyre.ibm.com (state: 
COMPLETE, exit status: 1)
17/05/03 16:45:48 WARN YarnAllocator: Container marked as failed: 
container_1493660524300_0103_02_000003 on host: abc3.fyre.ibm.com. Exit status: 
1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:48 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container 
marked as failed: container_1493660524300_0103_02_000003 on host: 
abc3.fyre.ibm.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000003
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:48 INFO BlockManagerMaster: Removal of executor 2 requested
17/05/03 16:45:48 INFO BlockManagerMasterEndpoint: Trying to remove executor 2 
from BlockManagerMaster.
17/05/03 16:45:48 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asked to remove 
non-existent executor 2
17/05/03 16:45:49 INFO YarnAllocator: Will request 1 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/05/03 16:45:49 INFO YarnAllocator: Submitted 1 unlocalized container 
requests.
17/05/03 16:45:49 INFO AMRMClientImpl: Received new token for : 
abc4.fyre.ibm.com:45454
17/05/03 16:45:49 INFO YarnAllocator: Launching container 
container_1493660524300_0103_02_000004 on host abc4.fyre.ibm.com
17/05/03 16:45:49 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
17/05/03 16:45:49 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/03 16:45:49 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc4.fyre.ibm.com:45454
17/05/03 16:45:49 INFO AMRMClientImpl: Received new token for : 
abc2.fyre.ibm.com:45454
17/05/03 16:45:49 INFO YarnAllocator: Launching container 
container_1493660524300_0103_02_000005 on host abc2.fyre.ibm.com
17/05/03 16:45:49 INFO YarnAllocator: Received 2 containers from YARN, 
launching executors on 1 of them.
17/05/03 16:45:49 INFO YarnAllocator: Completed container 
container_1493660524300_0103_02_000004 on host: abc4.fyre.ibm.com (state: 
COMPLETE, exit status: 1)
17/05/03 16:45:49 WARN YarnAllocator: Container marked as failed: 
container_1493660524300_0103_02_000004 on host: abc4.fyre.ibm.com. Exit status: 
1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:49 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/03 16:45:49 WARN YarnSchedulerBackend$YarnSchedulerEndpoint: Container 
marked as failed: container_1493660524300_0103_02_000004 on host: 
abc4.fyre.ibm.com. Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_1493660524300_0103_02_000004
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1

17/05/03 16:45:49 INFO BlockManagerMaster: Removal of executor 3 requested
17/05/03 16:45:49 INFO YarnSchedulerBackend$YarnDriverEndpoint: Asked to remove 
non-existent executor 3
17/05/03 16:45:49 INFO BlockManagerMasterEndpoint: Trying to remove executor 3 
from BlockManagerMaster.
17/05/03 16:45:49 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc2.fyre.ibm.com:45454
17/05/03 16:45:52 INFO ApplicationMaster: Final app status: FAILED, exitCode: 
11, (reason: Max number of executor failures (3) reached)
17/05/03 16:45:52 INFO ApplicationMaster: Unregistering ApplicationMaster with 
FAILED (diag message: Max number of executor failures (3) reached)
17/05/03 16:45:52 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/03 16:45:52 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/03 16:45:52 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/03 16:45:52 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/03 16:45:53 ERROR Utils: Uncaught exception in thread Thread-3
java.lang.IllegalArgumentException: Can not create a Path from a null string
        at org.apache.hadoop.fs.Path.checkPathArg(Path.java:122)
        at org.apache.hadoop.fs.Path.<init>(Path.java:134)
        at 
org.apache.spark.deploy.yarn.ApplicationMaster.org$apache$spark$deploy$yarn$ApplicationMaster$$cleanupStagingDir(ApplicationMaster.scala:539)
        at 
org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:233)
        at 
org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1$$anonfun$apply$mcV$sp$1.apply(ShutdownHookManager.scala:188)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1951)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply$mcV$sp(ShutdownHookManager.scala:188)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anonfun$runAll$1.apply(ShutdownHookManager.scala:188)
        at scala.util.Try$.apply(Try.scala:192)
        at 
org.apache.spark.util.SparkShutdownHookManager.runAll(ShutdownHookManager.scala:188)
        at 
org.apache.spark.util.SparkShutdownHookManager$$anon$2.run(ShutdownHookManager.scala:178)
        at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
17/05/03 16:45:53 INFO DiskBlockManager: Shutdown hook called
17/05/03 16:45:53 INFO ShutdownHookManager: Shutdown hook called
17/05/03 16:45:53 INFO ShutdownHookManager: Deleting directory 
/hadoop/yarn/local/usercache/hdfs/appcache/application_1493660524300_0103/spark-c1b259d5-1253-4093-bca8-7d1801c2c83a/userFiles-2946a4f7-3e80-4168-9176-008ea99c61b4
17/05/03 16:45:53 INFO ShutdownHookManager: Deleting directory 
/hadoop/yarn/local/usercache/hdfs/appcache/application_1493660524300_0103/spark-c1b259d5-1253-4093-bca8-7d1801c2c83a
[hdfs@abc1 ~]$ ID=1493660524300_0103
[hdfs@abc1 ~]$ yarn logs -applicationId application_"$ID" -containerId 
container_"$ID"_02_000004 -nodeAddress abc4.fyre.ibm.com:45454
17/05/03 16:55:23 INFO impl.TimelineClientImpl: Timeline service address: 
http://abc2.fyre.ibm.com:8188/ws/v1/timeline/
17/05/03 16:55:23 INFO client.RMProxy: Connecting to ResourceManager at 
abc1.fyre.ibm.com/172.16.171.219:8050
17/05/03 16:55:24 INFO zlib.ZlibFactory: Successfully loaded & initialized 
native-zlib library
17/05/03 16:55:24 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
LogType:stderr
Log Upload Time:Wed May 03 16:45:54 -0700 2017
LogLength:96
Log Contents:
Error: Could not find or load main class 
org.apache.spark.executor.CoarseGrainedExecutorBackend
End of LogType:stderr

LogType:stdout
Log Upload Time:Wed May 03 16:45:54 -0700 2017
LogLength:0
Log Contents:
End of LogType:stdout

[hdfs@abc1 ~]$ 
Log Type: stderr
Log Upload Time: Tue May 02 10:28:57 -0700 2017
Log Length: 7472
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/hadoop/yarn/local/filecache/11/spark2-iop-yarn-archive.tar.gz/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/iop/4.3.0.0-0000/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/05/02 10:28:43 INFO SignalUtils: Registered signal handler for TERM
17/05/02 10:28:43 INFO SignalUtils: Registered signal handler for HUP
17/05/02 10:28:43 INFO SignalUtils: Registered signal handler for INT
17/05/02 10:28:44 INFO ApplicationMaster: Preparing Local resources
17/05/02 10:28:45 INFO ApplicationMaster: ApplicationAttemptId: 
appattempt_1493660524300_0067_000001
17/05/02 10:28:45 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/05/02 10:28:45 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/05/02 10:28:45 INFO SecurityManager: Changing view acls groups to: 
17/05/02 10:28:45 INFO SecurityManager: Changing modify acls groups to: 
17/05/02 10:28:45 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users  with view permissions: Set(yarn, hdfs); 
groups with view permissions: Set(); users  with modify permissions: Set(yarn, 
hdfs); groups with modify permissions: Set()
17/05/02 10:28:46 INFO ApplicationMaster: Waiting for Spark driver to be 
reachable.
17/05/02 10:28:46 INFO ApplicationMaster: Driver now available: 
172.16.171.219:54998
17/05/02 10:28:46 INFO TransportClientFactory: Successfully created connection 
to /172.16.171.219:54998 after 121 ms (0 ms spent in bootstraps)
17/05/02 10:28:46 INFO ApplicationMaster$AMEndpoint: Add WebUI Filter. 
AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS
 -> abc1.fyre.ibm.com, PROXY_URI_BASES -> 
http://abc1.fyre.ibm.com:8088/proxy/application_1493660524300_0067),/proxy/application_1493660524300_0067)
17/05/02 10:28:46 INFO ApplicationMaster: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark_conf__<CPS>{{PWD}}/__spark_libs__/*<CPS>$HADOOP_CONF_DIR<CPS>/usr/iop/current/hadoop-client/*<CPS>/usr/iop/current/hadoop-client/lib/*<CPS>/usr/iop/current/hadoop-hdfs-client/*<CPS>/usr/iop/current/hadoop-hdfs-client/lib/*<CPS>/usr/iop/current/hadoop-yarn-client/*<CPS>/usr/iop/current/hadoop-yarn-client/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/usr/iop/current/ext/hadoop/*<CPS>/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/iop/4.3.0.0/hadoop/lib/hadoop-lzo-0.6.0.4.3.0.0.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop-lzo/lib/*:/usr/iop/current/ext/hadoop/*
    SPARK_YARN_STAGING_DIR -> 
hdfs://abc1.fyre.ibm.com:8020/user/hdfs/.sparkStaging/application_1493660524300_0067
    SPARK_USER -> hdfs
    SPARK_YARN_MODE -> true

  command:
    
LD_LIBRARY_PATH="/usr/iop/current/hadoop-client/lib/native:/usr/iop/current/hadoop-client/lib/native/Linux-amd64-64:$LD_LIBRARY_PATH"
 \ 
      {{JAVA_HOME}}/bin/java \ 
      -server \ 
      -Xmx1024m \ 
      -Djava.io.tmpdir={{PWD}}/tmp \ 
      '-Dspark.driver.port=54998' \ 
      '-Dspark.history.ui.port=18081' \ 
      '-Dspark.thriftserver.ui.port=4038' \ 
      -Dspark.yarn.app.container.log.dir=<LOG_DIR> \ 
      -XX:OnOutOfMemoryError='kill %p' \ 
      org.apache.spark.executor.CoarseGrainedExecutorBackend \ 
      --driver-url \ 
      spark://CoarseGrainedScheduler@172.16.171.219:54998 \ 
      --executor-id \ 
      <executorId> \ 
      --hostname \ 
      <hostname> \ 
      --cores \ 
      1 \ 
      --app-id \ 
      application_1493660524300_0067 \ 
      --user-class-path \ 
      file:$PWD/__app__.jar \ 
      1><LOG_DIR>/stdout \ 
      2><LOG_DIR>/stderr

  resources:
    __spark_libs__ -> resource { scheme: "hdfs" host: "abc1.fyre.ibm.com" port: 
8020 file: "/iop/apps/4.3.0.0-0000/spark2/spark2-iop-yarn-archive.tar.gz" } 
size: 225643850 timestamp: 1493660538918 type: ARCHIVE visibility: PUBLIC
    __spark_conf__ -> resource { scheme: "hdfs" host: "abc1.fyre.ibm.com" port: 
8020 file: 
"/user/hdfs/.sparkStaging/application_1493660524300_0067/__spark_conf__.zip" } 
size: 81162 timestamp: 1493746121596 type: ARCHIVE visibility: PRIVATE

===============================================================================
17/05/02 10:28:46 INFO RMProxy: Connecting to ResourceManager at 
abc1.fyre.ibm.com/172.16.171.219:8030
17/05/02 10:28:46 INFO YarnRMClient: Registering the ApplicationMaster
17/05/02 10:28:46 INFO YarnAllocator: Will request 2 executor container(s), 
each with 1 core(s) and 1408 MB memory (including 384 MB of overhead)
17/05/02 10:28:46 INFO YarnAllocator: Submitted 2 unlocalized container 
requests.
17/05/02 10:28:46 INFO ApplicationMaster: Started progress reporter thread with 
(heartbeat : 3000, initial allocation : 200) intervals
17/05/02 10:28:47 INFO AMRMClientImpl: Received new token for : 
abc3.fyre.ibm.com:45454
17/05/02 10:28:47 INFO YarnAllocator: Launching container 
container_1493660524300_0067_01_000002 on host abc3.fyre.ibm.com
17/05/02 10:28:47 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
17/05/02 10:28:47 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/02 10:28:47 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc3.fyre.ibm.com:45454
17/05/02 10:28:47 INFO AMRMClientImpl: Received new token for : 
abc4.fyre.ibm.com:45454
17/05/02 10:28:47 INFO YarnAllocator: Launching container 
container_1493660524300_0067_01_000003 on host abc4.fyre.ibm.com
17/05/02 10:28:47 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 1 of them.
17/05/02 10:28:47 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/02 10:28:47 INFO ContainerManagementProtocolProxy: Opening proxy : 
abc4.fyre.ibm.com:45454
17/05/02 10:28:50 INFO AMRMClientImpl: Received new token for : 
abc2.fyre.ibm.com:45454
17/05/02 10:28:50 INFO YarnAllocator: Received 1 containers from YARN, 
launching executors on 0 of them.
17/05/02 10:28:55 INFO YarnAllocator: Driver requested a total number of 0 
executor(s).
17/05/02 10:28:55 INFO ApplicationMaster$AMEndpoint: Driver terminated or 
disconnected! Shutting down. 172.16.171.219:54998
17/05/02 10:28:55 INFO ApplicationMaster: Final app status: SUCCEEDED, 
exitCode: 0
17/05/02 10:28:55 INFO ApplicationMaster$AMEndpoint: Driver terminated or 
disconnected! Shutting down. 172.16.171.219:54998
17/05/02 10:28:55 INFO ApplicationMaster: Unregistering ApplicationMaster with 
SUCCEEDED
17/05/02 10:28:55 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/02 10:28:55 INFO ApplicationMaster: Deleting staging directory 
hdfs://abc1.fyre.ibm.com:8020/user/hdfs/.sparkStaging/application_1493660524300_0067
17/05/02 10:28:55 INFO ShutdownHookManager: Shutdown hook called
Log Type: stderr
Log Upload Time: Tue May 02 12:26:45 -0700 2017
Log Length: 11159
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/hadoop/yarn/local/filecache/11/spark-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/iop/4.2.1.0/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/05/02 12:26:38 INFO ApplicationMaster: Registered signal handlers for [TERM, 
HUP, INT]
17/05/02 12:26:38 WARN SparkConf: The configuration key 
'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 
and and may be removed in the future. Please use the new key 
'spark.yarn.am.waitTime' instead.
17/05/02 12:26:38 INFO ApplicationMaster: ApplicationAttemptId: 
appattempt_1493752885968_0005_000001
17/05/02 12:26:39 INFO SecurityManager: Changing view acls to: yarn,hdfs
17/05/02 12:26:39 INFO SecurityManager: Changing modify acls to: yarn,hdfs
17/05/02 12:26:39 INFO SecurityManager: SecurityManager: authentication 
disabled; ui acls disabled; users with view permissions: Set(yarn, hdfs); users 
with modify permissions: Set(yarn, hdfs)
17/05/02 12:26:39 WARN SparkConf: The configuration key 
'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 
and and may be removed in the future. Please use the new key 
'spark.yarn.am.waitTime' instead.
17/05/02 12:26:39 WARN SparkConf: The configuration key 
'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 
and and may be removed in the future. Please use the new key 
'spark.yarn.am.waitTime' instead.
17/05/02 12:26:39 INFO ApplicationMaster: Waiting for Spark driver to be 
reachable.
17/05/02 12:26:39 INFO ApplicationMaster: Driver now available: 
172.16.152.36:32866
17/05/02 12:26:39 WARN SparkConf: The configuration key 
'spark.yarn.applicationMaster.waitTries' has been deprecated as of Spark 1.3 
and and may be removed in the future. Please use the new key 
'spark.yarn.am.waitTime' instead.
17/05/02 12:26:39 INFO ApplicationMaster$AMEndpoint: Add WebUI Filter. 
AddWebUIFilter(org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter,Map(PROXY_HOSTS
 -> mhx1.fyre.ibm.com, PROXY_URI_BASES -> 
http://mhx1.fyre.ibm.com:8088/proxy/application_1493752885968_0005),/proxy/application_1493752885968_0005)
17/05/02 12:26:39 INFO RMProxy: Connecting to ResourceManager at 
mhx1.fyre.ibm.com/172.16.152.36:8030
17/05/02 12:26:39 INFO YarnRMClient: Registering the ApplicationMaster
17/05/02 12:26:39 INFO YarnAllocator: Will request 2 executor containers, each 
with 1 cores and 1408 MB memory including 384 MB overhead
17/05/02 12:26:39 INFO YarnAllocator: Container request (host: Any, capability: 
<memory:1408, vCores:1>)
17/05/02 12:26:39 INFO YarnAllocator: Container request (host: Any, capability: 
<memory:1408, vCores:1>)
17/05/02 12:26:39 INFO ApplicationMaster: Started progress reporter thread with 
(heartbeat : 5000, initial allocation : 200) intervals
17/05/02 12:26:40 INFO AMRMClientImpl: Received new token for : 
mhx2.fyre.ibm.com:45454
17/05/02 12:26:40 INFO AMRMClientImpl: Received new token for : 
mhx4.fyre.ibm.com:45454
17/05/02 12:26:40 INFO YarnAllocator: Launching container 
container_1493752885968_0005_01_000002 for on host mhx2.fyre.ibm.com
17/05/02 12:26:40 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: 
spark://CoarseGrainedScheduler@172.16.152.36:32866,  executorHostname: 
mhx2.fyre.ibm.com
17/05/02 12:26:40 INFO YarnAllocator: Launching container 
container_1493752885968_0005_01_000003 for on host mhx4.fyre.ibm.com
17/05/02 12:26:40 INFO ExecutorRunnable: Starting Executor Container
17/05/02 12:26:40 INFO YarnAllocator: Launching ExecutorRunnable. driverUrl: 
spark://CoarseGrainedScheduler@172.16.152.36:32866,  executorHostname: 
mhx4.fyre.ibm.com
17/05/02 12:26:40 INFO ExecutorRunnable: Starting Executor Container
17/05/02 12:26:40 INFO YarnAllocator: Received 2 containers from YARN, 
launching executors on 2 of them.
17/05/02 12:26:40 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/02 12:26:40 INFO ExecutorRunnable: Setting up ContainerLaunchContext
17/05/02 12:26:40 INFO ContainerManagementProtocolProxy: 
yarn.client.max-cached-nodemanagers-proxies : 0
17/05/02 12:26:40 INFO ExecutorRunnable: Setting up ContainerLaunchContext
17/05/02 12:26:40 INFO ExecutorRunnable: Preparing Local resources
17/05/02 12:26:40 INFO ExecutorRunnable: Preparing Local resources
17/05/02 12:26:40 INFO ExecutorRunnable: Prepared Local resources 
Map(__spark__.jar -> resource { scheme: "hdfs" host: "mhx1.fyre.ibm.com" port: 
8020 file: "/iop/apps/4.2.1.0/spark/jars/spark-assembly.jar" } size: 198422475 
timestamp: 1493752890532 type: FILE visibility: PUBLIC)
17/05/02 12:26:40 INFO ExecutorRunnable: Prepared Local resources 
Map(__spark__.jar -> resource { scheme: "hdfs" host: "mhx1.fyre.ibm.com" port: 
8020 file: "/iop/apps/4.2.1.0/spark/jars/spark-assembly.jar" } size: 198422475 
timestamp: 1493752890532 type: FILE visibility: PUBLIC)
17/05/02 12:26:40 INFO ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>/etc/hadoop/conf<CPS>/usr/iop/current/hadoop-client/*<CPS>/usr/iop/current/hadoop-client/lib/*<CPS>/usr/iop/current/hadoop-hdfs-client/*<CPS>/usr/iop/current/hadoop-hdfs-client/lib/*<CPS>/usr/iop/current/hadoop-yarn-client/*<CPS>/usr/iop/current/hadoop-yarn-client/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/iop//hadoop/lib/hadoop-lzo-0.5.1.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop-lzo/lib/*
    SPARK_YARN_USER_ENV -> 
JAVA_LIBRARY_PATH=:/usr/iop/current/hadoop-client/lib/native,LD_LIBRARY_PATH=:/usr/iop/current/hadoop-client/lib/native
    SPARK_LOG_URL_STDERR -> 
http://mhx4.fyre.ibm.com:8042/node/containerlogs/container_1493752885968_0005_01_000003/hdfs/stderr?start=-4096
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 198422475
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1493752885968_0005
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PUBLIC
    SPARK_USER -> hdfs
    SPARK_YARN_MODE -> true
    JAVA_LIBRARY_PATH -> :/usr/iop/current/hadoop-client/lib/native
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1493752890532
    LD_LIBRARY_PATH -> :/usr/iop/current/hadoop-client/lib/native
    SPARK_LOG_URL_STDOUT -> 
http://mhx4.fyre.ibm.com:8042/node/containerlogs/container_1493752885968_0005_01_000003/hdfs/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> 
hdfs://mhx1.fyre.ibm.com:8020/iop/apps/4.2.1.0/spark/jars/spark-assembly.jar#__spark__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m 
-Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.history.ui.port=18080' 
'-Dspark.driver.port=32866' -Dspark.yarn.app.container.log.dir=<LOG_DIR> 
org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url 
spark://CoarseGrainedScheduler@172.16.152.36:32866 --executor-id 2 --hostname 
mhx4.fyre.ibm.com --cores 1 --app-id application_1493752885968_0005 
--user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
17/05/02 12:26:40 INFO ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> 
{{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>/etc/hadoop/conf<CPS>/usr/iop/current/hadoop-client/*<CPS>/usr/iop/current/hadoop-client/lib/*<CPS>/usr/iop/current/hadoop-hdfs-client/*<CPS>/usr/iop/current/hadoop-hdfs-client/lib/*<CPS>/usr/iop/current/hadoop-yarn-client/*<CPS>/usr/iop/current/hadoop-yarn-client/lib/*<CPS>/usr/lib/hadoop-lzo/lib/*<CPS>/etc/hadoop/conf/:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/iop//hadoop/lib/hadoop-lzo-0.5.1.jar:/etc/hadoop/conf/secure:/usr/lib/hadoop-lzo/lib/*
    SPARK_YARN_USER_ENV -> 
JAVA_LIBRARY_PATH=:/usr/iop/current/hadoop-client/lib/native,LD_LIBRARY_PATH=:/usr/iop/current/hadoop-client/lib/native
    SPARK_LOG_URL_STDERR -> 
http://mhx2.fyre.ibm.com:8042/node/containerlogs/container_1493752885968_0005_01_000002/hdfs/stderr?start=-4096
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 198422475
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1493752885968_0005
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PUBLIC
    SPARK_USER -> hdfs
    SPARK_YARN_MODE -> true
    JAVA_LIBRARY_PATH -> :/usr/iop/current/hadoop-client/lib/native
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1493752890532
    LD_LIBRARY_PATH -> :/usr/iop/current/hadoop-client/lib/native
    SPARK_LOG_URL_STDOUT -> 
http://mhx2.fyre.ibm.com:8042/node/containerlogs/container_1493752885968_0005_01_000002/hdfs/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> 
hdfs://mhx1.fyre.ibm.com:8020/iop/apps/4.2.1.0/spark/jars/spark-assembly.jar#__spark__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m 
-Xmx1024m -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.history.ui.port=18080' 
'-Dspark.driver.port=32866' -Dspark.yarn.app.container.log.dir=<LOG_DIR> 
org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url 
spark://CoarseGrainedScheduler@172.16.152.36:32866 --executor-id 1 --hostname 
mhx2.fyre.ibm.com --cores 1 --app-id application_1493752885968_0005 
--user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
17/05/02 12:26:40 INFO ContainerManagementProtocolProxy: Opening proxy : 
mhx2.fyre.ibm.com:45454
17/05/02 12:26:40 INFO ContainerManagementProtocolProxy: Opening proxy : 
mhx4.fyre.ibm.com:45454
17/05/02 12:26:44 INFO ApplicationMaster$AMEndpoint: Driver terminated or 
disconnected! Shutting down. mhx1.fyre.ibm.com:32866
17/05/02 12:26:44 INFO ApplicationMaster: Final app status: SUCCEEDED, 
exitCode: 0
17/05/02 12:26:44 INFO ApplicationMaster: Unregistering ApplicationMaster with 
SUCCEEDED
17/05/02 12:26:44 INFO AMRMClientImpl: Waiting for application to be 
successfully unregistered.
17/05/02 12:26:44 INFO ApplicationMaster: Deleting staging directory 
.sparkStaging/application_1493752885968_0005
17/05/02 12:26:44 INFO ShutdownHookManager: Shutdown hook called