[ https://issues.apache.org/jira/browse/SPARK-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340379#comment-14340379 ]

Mridul Muralidharan commented on SPARK-6050:
--------------------------------------------


[~tgraves] You are right, CPU scheduling is not turned on in our cluster.
In the description, I am specifying the number of cores as 8.

For our jobs, I request the entire memory of the node so it is used completely.
We need to specify the number of cores for Spark to launch multiple task
threads on the executor (otherwise it would be a large-memory, single-threaded
executor), and '--executor-cores' is the means to do so.

So we either need to check whether CPU scheduling is turned on in YARN before
requesting cores as a resource (and fall back to 1 if it is off), or assume the
user knows best: always ask YARN for 1 core but spin off multiple threads, as
in 1.2.
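For reference, turning on CPU scheduling under YARN's CapacityScheduler is typically done by switching the resource calculator in capacity-scheduler.xml. A sketch of that setting (assuming the CapacityScheduler and the stock Hadoop property names; exact values may differ per distribution):

```xml
<!-- capacity-scheduler.xml: schedule on both memory and vcores.
     With the default DefaultResourceCalculator, YARN considers only
     memory and effectively ignores vcore requests, which is why an
     8-core executor request can behave unexpectedly. -->
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
```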

> Spark on YARN does not work when --executor-cores is specified
> --------------------------------------------------------------
>
>                 Key: SPARK-6050
>                 URL: https://issues.apache.org/jira/browse/SPARK-6050
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.3.0
>         Environment: 2.5 based YARN cluster.
>            Reporter: Mridul Muralidharan
>            Priority: Blocker
>
> There are multiple issues here (which I will detail as comments), but to
> reproduce: running the following ALWAYS hangs in our cluster with the 1.3 RC:
>
> ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
>   --master yarn-cluster --executor-cores 8 --num-executors 15 \
>   --driver-memory 4g --executor-memory 2g --queue webmap \
>   lib/spark-examples*.jar 10



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
