That means there are not enough resources available in your YARN cluster.
Either you requested very large resources for your Spark application, or
other jobs are occupying the YARN resources.
So you need to either decrease the resources requested or kill the other
YARN applications.
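For example, a rough sketch of both options (the application id below is a
placeholder, and the memory values are only illustrative; adjust them to
your cluster):

```shell
# See which applications are currently holding YARN resources.
yarn application -list -appStates RUNNING

# Kill a competing application if appropriate (the id is a placeholder).
yarn application -kill application_1455520000000_0001

# Or resubmit the Spark job with smaller resource requests, e.g.:
spark-submit --master yarn-cluster \
  --driver-memory 512m \
  --executor-memory 512m \
  --num-executors 1 \
  --class cobaSpark.pack \
  hdfs://localhost:8020/user/apps/cobaSpark.jar \
  /user/apps/sample1.txt /user/apps/oozie-spark/out
```

When submitting through Oozie, the same spark-submit flags can usually be
passed via a <spark-opts> element inside the spark action, if your
spark-action schema version supports it.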

On Mon, Feb 15, 2016 at 5:59 PM, tkg_cangkul <[email protected]> wrote:

> this is the information i have in stderr. do u have any suggestion for
> tuning the memory allocation?
>
>  cluster.YarnClusterScheduler: Initial job has not accepted any resources;
> check your cluster UI to ensure that workers are registered and have
> sufficient resources
>
>
>
> On 15/02/16 16:45, Jeff Zhang wrote:
>
>> Which file did you check? You should check the stderr file.
>>
>>
>>
>> On Mon, Feb 15, 2016 at 5:44 PM, tkg_cangkul <[email protected]>
>> wrote:
>>
>>> yes, you're right, jeff. i'm sorry, that was my mistake.
>>> i've checked the application log but there is no helpful information in
>>> it, only the classpath and something like this:
>>>
>>> "Stage Infos":[{"Stage ID":0,"Stage Attempt ID":0,
>>> "Stage Name":"groupBy at pack.java:186","Number of Tasks":2,
>>> "RDD Info":[{"RDD ID":2,"Name":"2","Storage Level":{"Use Disk":false,
>>> "Use Memory":false,"Use Tachyon":false,"Deserialized":false,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0},{"RDD ID":1,"Name":"1","Storage Level":{"Use Disk":false,
>>> "Use Memory":true,"Use Tachyon":false,"Deserialized":true,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0},{"RDD ID":0,"Name":"/user/apps/sample1.txt",
>>> "Storage Level":{"Use Disk":false,"Use Memory":false,
>>> "Use Tachyon":false,"Deserialized":false,"Replication":1},
>>> "Number of Partitions":2,"Number of Cached Partitions":0,
>>> "Memory Size":0,"Tachyon Size":0,"Disk Size":0}],
>>> "Details":"org.apache.spark.api.java.AbstractJavaRDDLike.groupBy(JavaRDDLike.scala:46)\ncobaSpark.pack.execute(pack.java:186)\ncobaSpark.pack.main(pack.java:115)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\njava.lang.reflect.Method.invoke(Method.java:606)\norg.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)",
>>> "Accumulables":[]},{"Stage ID":1,"Stage Attempt ID":0,
>>> "Stage Name":"sortByKey at pack.java:190","Number of Tasks":2,
>>> "RDD Info":[{"RDD ID":6,"Name":"6","Storage Level":{"Use Disk":false,
>>> "Use Memory":false,"Use Tachyon":false,"Deserialized":false,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0},{"RDD ID":4,"Name":"4","Storage Level":{"Use Disk":false,
>>> "Use Memory":false,"Use Tachyon":false,"Deserialized":false,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0},{"RDD ID":5,"Name":"5","Storage Level":{"Use Disk":false,
>>> "Use Memory":false,"Use Tachyon":false,"Deserialized":false,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0},{"RDD ID":3,"Name":"3","Storage Level":{"Use Disk":false,
>>> "Use Memory":false,"Use Tachyon":false,"Deserialized":false,
>>> "Replication":1},"Number of Partitions":2,
>>> "Number of Cached Partitions":0,"Memory Size":0,"Tachyon Size":0,
>>> "Disk Size":0}],
>>> "Details":"org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:873)\ncobaSpark.pack.execute(pack.java:190)\ncobaSpark.pack.main(pack.java:115)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)\nsun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\njava.lang.reflect.Method.invoke(Method.java:606)\norg.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:480)",
>>> "Accumulables":[]}],"Stage IDs":[0,1],"Properties":{}}
>>>
>>>
>>>
>>> On 15/02/16 16:32, Jeff Zhang wrote:
>>>
>>>>> there are no application logs for the spark job. i think it's because
>>>>> the job is still in the running state.
>>>>
>>>> Even if the application is in the running state, the logs should still
>>>> exist, unless the application is in the accepted state. Could you check
>>>> the RM UI?
>>>>
>>>> On Mon, Feb 15, 2016 at 5:18 PM, tkg_cangkul <[email protected]>
>>>> wrote:
>>>>
>>>>> there are no application logs for the spark job. i think it's because
>>>>> the job is still in the running state.
>>>>> i've tuned it following your earlier tuning mail too. the hadoop job
>>>>> succeeded using your suggested tuning config, but it still doesn't work
>>>>> on spark.
>>>>> is there any other configuration that i must set? especially in the
>>>>> spark configuration
>>>>>
>>>>>
>>>>> On 15/02/16 15:53, Jaydeep Vishwakarma wrote:
>>>>>
>>>>>> By the log snap I can see that the Launcher is able to launch the
>>>>>> spark job. Please share the application logs of the spark job.
>>>>>> I am also suspecting that 2 cores and a lack of memory might be
>>>>>> creating the problem.
>>>>>> You may wish to tune your cluster. Please refer to my earlier mail for
>>>>>> tuning.
>>>>>>
>>>>>> On Mon, Feb 15, 2016 at 2:04 PM, tkg_cangkul <[email protected]> wrote:
>>>>>>
>>>>>>       hi jaydeep,
>>>>>>       thx for your reply.
>>>>>>
>>>>>>       the job has been submitted successfully, but the process is
>>>>>>       stuck in the running state.
>>>>>>       the RM memory that i've set is 5GB, and i have separated the
>>>>>>       queues for the mapred job & the oozie launcher job.
>>>>>>
>>>>>>       i've succeeded in submitting a hadoop job with this config. but
>>>>>>       when i try to submit a spark job it gets stuck in that state. is
>>>>>>       there any missed configuration? pls help. FYI, this is just a
>>>>>>       single node machine.
>>>>>>
>>>>>>
>>>>>>       On 15/02/16 14:05, Jaydeep Vishwakarma wrote:
>>>>>>
>>>>>>>       Can you check the error you have in the app master?
>>>>>>>
>>>>>>>       On Mon, Feb 15, 2016 at 12:19 PM, tkg_cangkul
>>>>>>> <[email protected]> wrote:
>>>>>>>
>>>>>>>>       i tried to submit a spark job with oozie but it failed with
>>>>>>>> this message:
>>>>>>>>
>>>>>>>>       Main class [org.apache.oozie.action.hadoop.SparkMain], exit
>>>>>>>> code [1]
>>>>>>>>
>>>>>>>>       is there any wrong configuration on my side?
>>>>>>>>       this is my xml conf.
>>>>>>>>
>>>>>>>>       <workflow-app xmlns='uri:oozie:workflow:0.5'
>>>>>>>> name='tkg-cangkul'>
>>>>>>>>            <start to='spark-node' />
>>>>>>>>            <action name='spark-node'>
>>>>>>>>                <spark xmlns="uri:oozie:spark-action:0.1">
>>>>>>>>                    <job-tracker>${jobTracker}</job-tracker>
>>>>>>>>                    <name-node>${nameNode}</name-node>
>>>>>>>>                        <configuration>
>>>>>>>>                                <property>
>>>>>>>>                                        <name>mapred.job.queue.name</name>
>>>>>>>>                                        <value>default</value>
>>>>>>>>                                </property>
>>>>>>>>                                <property>
>>>>>>>>                                        <name>oozie.launcher.mapred.job.queue.name</name>
>>>>>>>>                                        <value>user1</value>
>>>>>>>>                                </property>
>>>>>>>>                        </configuration>
>>>>>>>>                    <master>yarn-cluster</master>
>>>>>>>>                    <name>Spark</name>
>>>>>>>>                    <class>cobaSpark.pack</class>
>>>>>>>>       <jar>hdfs://localhost:8020/user/apps/cobaSpark.jar</jar>
>>>>>>>>                    <arg>/user/apps/sample1.txt</arg>
>>>>>>>>                    <arg>/user/apps/oozie-spark/out</arg>
>>>>>>>>                </spark>
>>>>>>>>                <ok to="end" />
>>>>>>>>                <error to="fail" />
>>>>>>>>            </action>
>>>>>>>>            <kill name="fail">
>>>>>>>>                <message>Workflow failed, error
>>>>>>>>                    message[${wf:errorMessage(wf:lastErrorNode())}]
>>>>>>>>                </message>
>>>>>>>>            </kill>
>>>>>>>>            <end name='end' />
>>>>>>>>       </workflow-app>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>
>


-- 
Best Regards

Jeff Zhang
