[ 
https://issues.apache.org/jira/browse/SPARK-14190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15214097#comment-15214097
 ] 

Sean Owen commented on SPARK-14190:
-----------------------------------

This means the cluster is not giving your application resources. You haven't 
shown enough information to tell why, but that alone is not a Spark problem. 
Please investigate the ResourceManager logs and your YARN settings first.
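As a starting point, a couple of standard YARN CLI commands can confirm whether the cluster actually has registered nodes and whether the application is stuck waiting on the scheduler (a diagnostic sketch, not part of the original report):

```shell
# Check that NodeManagers are registered and in RUNNING state,
# and that they report usable memory/vcores.
yarn node -list -all

# Check whether the Spark application is actually RUNNING or is
# stuck in ACCEPTED (i.e. waiting for the queue to grant resources).
yarn application -list -appStates ACCEPTED,RUNNING
```

If the application sits in ACCEPTED, the queue or scheduler configuration is the place to look, not Spark itself.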

> when spark.dynamicAllocation.enabled=true the application is expecting more 
> resources and not able to use the resources available with Resourcemanager
> ------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-14190
>                 URL: https://issues.apache.org/jira/browse/SPARK-14190
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Ramgopal N
>
> I am using spark-1.5.1-bin-hadoop2.6.
> I have configured "spark.shuffle.service.enabled=true" and am running the 
> tests. The application emits the WARN messages below and runs forever.
> On the RM UI: VCores Total=570, Memory Total=3TB, Memory Used=14GB
> 16/03/28 00:39:01 WARN YarnScheduler: Initial job has not accepted any 
> resources; check your cluster UI to ensure that workers are registered and 
> have sufficient resources
> 16/03/28 00:39:16 WARN YarnScheduler: Initial job has not accepted any 
> resources; check your cluster UI to ensure that workers are registered and 
> have sufficient resources
> When "spark.dynamicAllocation.enabled" is not set, the application succeeds 
> when executor memory, driver memory, and executor instances are set explicitly.
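The two setups the reporter contrasts can be sketched as spark-submit invocations. The resource values below are illustrative assumptions, not the reporter's actual settings; note that dynamic allocation also requires the external shuffle service to be deployed in each YARN NodeManager, not just enabled on the Spark side:

```shell
# Static allocation (reported to work): resources pinned explicitly.
# Values are hypothetical examples.
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-memory 4g \
  --driver-memory 2g \
  myapp.jar

# Dynamic allocation (the failing case): executors are requested on demand.
# spark.shuffle.service.enabled must also be configured in yarn-site.xml
# (yarn.nodemanager.aux-services) for executors to be granted.
spark-submit \
  --master yarn \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  myapp.jar
```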



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
