I use this command to submit a *Spark application* to a *YARN cluster*:

    export YARN_CONF_DIR=conf
    bin/spark-submit --class "Mining" \
      --master yarn-cluster \
      --executor-memory 512m \
      ./target/scala-2.10/mining-assembly-0.1.jar

*In the Web UI, it is stuck on* UNDEFINED

[image: YARN web UI showing the application stuck in the UNDEFINED state]

*In the console, it is stuck at:*

    14/11/12 16:37:55 INFO yarn.Client: Application report from ASM:
         application identifier: application_1415704754709_0017
         appId: 17
         clientToAMToken: null
         appDiagnostics:
         appMasterHost: example.com
         appQueue: default
         appMasterRpcPort: 0
         appStartTime: 1415784586000
         yarnAppState: RUNNING
         distributedFinalState: UNDEFINED
         appTrackingUrl: http://example.com:8088/proxy/application_1415704754709_0017/
         appUser: rain
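The client just keeps polling and reprinting this report while the application sits in that state. As a side note, the same report can also be fetched on demand with the standard YARN CLI, using the application id from the output above:

    yarn application -status application_1415704754709_0017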

Update:

Diving into the container logs in the Web UI at
http://example.com:8042/node/containerlogs/container_1415704754709_0017_01_000001/rain/stderr/?start=0,
I found this:

    14/11/12 02:11:47 WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
    14/11/12 02:11:47 DEBUG Client: IPC Client (1211012646) connection to spark.mvs.vn/192.168.64.142:8030 from rain sending #24418
    14/11/12 02:11:47 DEBUG Client: IPC Client (1211012646) connection to spark.mvs.vn/192.168.64.142:8030 from rain got value #24418
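The warning says to check that the workers (NodeManagers) are registered and have enough memory. Besides the cluster UI on port 8088, this can be checked from the command line with the YARN CLI; a minimal sketch (the Node-Id in the second command is illustrative, taken from whatever the first command lists):

    # List the NodeManagers registered with the ResourceManager
    yarn node -list
    # Show memory capacity and usage for one node
    yarn node -status spark.mvs.vn:45454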

I found that this problem has a solution here:
http://hortonworks.com/hadoop-tutorial/using-apache-spark-hdp/

The Hadoop cluster must have sufficient memory for the request.

For example, submitting a job with 1GB of memory allocated for the executor and the Spark driver fails with the above error in the HDP 2.1 Sandbox. Reduce the memory requested for the executor and the Spark driver to 512m and restart the cluster.
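Applied to the command above, that means passing an explicit `--driver-memory` alongside the reduced `--executor-memory` (a sketch using the standard Spark 1.x spark-submit flags):

    export YARN_CONF_DIR=conf
    bin/spark-submit --class "Mining" \
      --master yarn-cluster \
      --driver-memory 512m \
      --executor-memory 512m \
      ./target/scala-2.10/mining-assembly-0.1.jar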

I'm trying this solution and hopefully it will work.
