There are no application logs for the Spark job; I think it's because the job is still in the running state. I've also tuned it as described in your earlier mail. The Hadoop job succeeded with your suggested tuning config, but it still doesn't work for Spark. Is there any other configuration I must set, especially in the Spark configuration?
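In case it helps: a common culprit on a single-node machine is that the default executor/driver footprint exceeds what YARN can allocate, so the application master sits in RUNNING while its executor requests starve. A hedged sketch of a `<spark-opts>` element that shrinks the footprint (the element goes inside the `<spark>` action after `<jar>`, per the spark-action schema; the values below are illustrative assumptions, not tuned for your box):

```xml
<!-- inside <spark xmlns="uri:oozie:spark-action:0.1">, after <jar>;
     memory/core values are placeholder assumptions for a small single-node cluster -->
<spark-opts>
    --driver-memory 512m
    --executor-memory 512m
    --executor-cores 1
    --num-executors 1
</spark-opts>
```

Also double-check that the launcher queue (`user1`) and the `default` queue together leave enough headroom, since on one node the Oozie launcher container and the Spark AM/executors all compete for the same memory.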

On 15/02/16 15:53, Jaydeep Vishwakarma wrote:
From the log snippet I can see that the Launcher is able to launch the Spark job.
Please share the application logs of the Spark job.
I also suspect that 2 cores and a lack of memory might be creating the problem.
You may wish to tune your cluster; please refer to my earlier mail for tuning.
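To pull those application logs, assuming log aggregation is enabled on the cluster, commands along these lines should work (the application ID below is a placeholder you would read off the ResourceManager UI or the list command):

```shell
# List running/accepted applications to find the Spark job's ID
yarn application -list -appStates RUNNING,ACCEPTED

# Fetch the aggregated container logs for it (ID is a placeholder)
yarn logs -applicationId application_1455000000000_0001
```

Note that on older Hadoop 2.x releases, `yarn logs` only returns output after the application finishes, so for a job stuck in RUNNING you may need to follow the container log links in the ResourceManager web UI instead.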

On Mon, Feb 15, 2016 at 2:04 PM, tkg_cangkul <[email protected]> wrote:

    hi jaydeep,
    thanks for your reply.

    The job has been submitted successfully, but the process is stuck in
    the running state.
    The RM memory that I've set is 5GB, and I have separated the queues
    for the mapred job & the oozie launcher job.

    RM
    I've succeeded in submitting a Hadoop job with this config. But when
    I try to submit a Spark job, it gets stuck in that state. Is there
    any configuration I've missed? Please help. FYI, this is just a
    single-node machine.


    On 15/02/16 14:05, Jaydeep Vishwakarma wrote:
    Can you check what error you have in the app master?

    On Mon, Feb 15, 2016 at 12:19 PM, tkg_cangkul <[email protected]> wrote:

    I tried to submit a Spark job with Oozie, but it failed with this message:

    Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [1]

    Is there any wrong configuration on my part?
    This is my XML conf:

    <workflow-app xmlns='uri:oozie:workflow:0.5' name='tkg-cangkul'>
        <start to='spark-node' />
        <action name='spark-node'>
            <spark xmlns="uri:oozie:spark-action:0.1">
                <job-tracker>${jobTracker}</job-tracker>
                <name-node>${nameNode}</name-node>
                <configuration>
                    <property>
                        <name>mapred.job.queue.name</name>
                        <value>default</value>
                    </property>
                    <property>
                        <name>oozie.launcher.mapred.job.queue.name</name>
                        <value>user1</value>
                    </property>
                </configuration>
                <master>yarn-cluster</master>
                <name>Spark</name>
                <class>cobaSpark.pack</class>
                <jar>hdfs://localhost:8020/user/apps/cobaSpark.jar</jar>
                <arg>/user/apps/sample1.txt</arg>
                <arg>/user/apps/oozie-spark/out</arg>
            </spark>
            <ok to="end" />
            <error to="fail" />
        </action>
        <kill name="fail">
            <message>Workflow failed, error
                message[${wf:errorMessage(wf:lastErrorNode())}]
            </message>
        </kill>
        <end name='end' />
    </workflow-app>
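For what it's worth, another frequent cause of `Main class [org.apache.oozie.action.hadoop.SparkMain], exit code [1]` is the Oozie Spark sharelib not being on the launcher's classpath. Assuming the standard Oozie sharelib is installed, the job.properties would typically need the system-libpath flag enabled; a sketch (the property names are real Oozie properties, but the host/port and path values are placeholders for this single-node setup):

```properties
# job.properties -- values below are placeholders for a single-node setup
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
oozie.wf.application.path=${nameNode}/user/apps/oozie-spark
# make the Oozie sharelib (including the spark lib) available to the launcher
oozie.use.system.libpath=true
```

If the sharelib turns out to be missing or stale, `oozie admin -shareliblist spark` shows what the server currently sees.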





