[ https://issues.apache.org/jira/browse/SPARK-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340972#comment-14340972 ]
Marcelo Vanzin commented on SPARK-6050:
---------------------------------------
That sounds like a potentially different issue. But since the requested
resources are static in YarnAllocator, I don't see how the code would use
only 6 containers when the RM sends more, unless the RM can send back
containers that don't match your request (aside from the weird situation for
which we're adding the workaround). Otherwise, the code should match all of
them and YarnAllocator should use all of them.
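The matching behavior described above can be illustrated with a simplified model. This is plain Python, not Spark's actual Scala YarnAllocator, and the resource numbers are made up for illustration: containers whose resource profile exactly matches the static request are kept, and anything else would be handed back to the RM.

```python
# Hypothetical, simplified model of container matching against a static
# resource request. NOT Spark's actual YarnAllocator code.
from collections import namedtuple

Resource = namedtuple("Resource", ["vcores", "memory_mb"])

def match_containers(requested, allocated):
    """Keep containers whose resource exactly matches the static request;
    mismatched containers would be released back to the RM."""
    matched = [c for c in allocated if c == requested]
    released = [c for c in allocated if c != requested]
    return matched, released

# Illustrative numbers: 8 cores requested, 2g executor memory plus overhead.
requested = Resource(vcores=8, memory_mb=2432)
# Suppose the RM returned a mix: 6 matching containers and 9 with 1 vcore.
allocated = [Resource(8, 2432)] * 6 + [Resource(1, 2432)] * 9

matched, released = match_containers(requested, allocated)
print(len(matched), len(released))  # 6 9
```

Under this model, the allocator would end up using only the 6 matching containers, which is exactly the symptom being discussed: it only happens if the RM returns containers that don't match the request.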
> Spark on YARN does not work when --executor-cores is specified
> --------------------------------------------------------------
>
> Key: SPARK-6050
> URL: https://issues.apache.org/jira/browse/SPARK-6050
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 1.3.0
> Environment: 2.5 based YARN cluster.
> Reporter: Mridul Muralidharan
> Priority: Blocker
>
> There are multiple issues here (which I will detail as comments), but to
> reproduce: running the following ALWAYS hangs in our cluster with the 1.3 RC:
> ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master
> yarn-cluster --executor-cores 8 --num-executors 15 --driver-memory 4g
> --executor-memory 2g --queue webmap lib/spark-examples*.jar
> 10
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)