Hi Jaydeep,
Thanks for your reply.
The job now submits successfully, but the process gets stuck in the RUNNING state. The ResourceManager memory I've set is 5GB, and I have separated the queues for the MapReduce job and the Oozie launcher job.
[screenshot of the ResourceManager UI]
I've successfully submitted a Hadoop job with this config, but when I try to submit a Spark job it gets stuck in that state. Is there any configuration I've missed? Please help. FYI, this is just a single-node machine.
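Not something stated in this thread, just a common cause worth ruling out: on a single-node cluster, an Oozie-launched Spark job needs two applications running at once (the Oozie launcher plus the Spark job itself), and the CapacityScheduler's default ApplicationMaster resource limit of 10% can leave the second one stuck. A sketch of the relevant setting in capacity-scheduler.xml (the property name is the standard YARN one; the value 0.5 is an illustrative guess, not from the original mail):

```xml
<!-- capacity-scheduler.xml: raise the share of cluster resources that
     ApplicationMasters may use, so the Oozie launcher AM and the Spark AM
     can both run on a small single-node cluster. -->
<property>
  <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
  <value>0.5</value>
</property>
```

After changing this, refresh the scheduler (e.g. `yarn rmadmin -refreshQueues`) or restart the ResourceManager for it to take effect.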
On 15/02/16 14:05, Jaydeep Vishwakarma wrote:
Can you check the error you have in the app master?
On Mon, Feb 15, 2016 at 12:19 PM, tkg_cangkul <[email protected]> wrote:
I tried to submit a Spark job with Oozie, but it failed with this message:
Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [1]
Is there any wrong configuration on my side? This is my workflow XML:
<workflow-app xmlns='uri:oozie:workflow:0.5' name='tkg-cangkul'>
    <start to='spark-node' />
    <action name='spark-node'>
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>default</value>
                </property>
                <property>
                    <name>oozie.launcher.mapred.job.queue.name</name>
                    <value>user1</value>
                </property>
            </configuration>
            <master>yarn-cluster</master>
            <name>Spark</name>
            <class>cobaSpark.pack</class>
            <jar>hdfs://localhost:8020/user/apps/cobaSpark.jar</jar>
            <arg>/user/apps/sample1.txt</arg>
            <arg>/user/apps/oozie-spark/out</arg>
        </spark>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
</workflow-app>
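For reference, the workflow above expects `${jobTracker}` and `${nameNode}` to be supplied at submission time. A minimal job.properties sketch that would match it, assuming a single-node setup (the port numbers and application path are illustrative guesses consistent with the paths in the workflow, not taken from the original mail):

```properties
# job.properties sketch -- hostnames, ports, and paths are assumptions
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
# HDFS directory containing workflow.xml
oozie.wf.application.path=${nameNode}/user/apps/oozie-spark
# Make the Oozie Spark sharelib available to the launcher
oozie.use.system.libpath=true
```

Submitted with something like `oozie job -config job.properties -run`. A missing or mismatched Spark sharelib is another frequent cause of `SparkMain` exiting with code 1.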