[
https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813801#comment-15813801
]
Saisai Shao commented on SPARK-19090:
-------------------------------------
I also tested with Spark 1.5.0 and I don't see the issue here; the number of
cores per executor matches what I set:
{noformat}
17/01/10 12:00:31 INFO yarn.YarnRMClient: Registering the ApplicationMaster
17/01/10 12:00:31 INFO yarn.YarnAllocator: Will request 1 executor containers,
each with 2 cores and 1408 MB memory including 384 MB overhead
17/01/10 12:00:31 INFO yarn.YarnAllocator: Container request (host: Any,
capability: <memory:1408, vCores:2>)
17/01/10 12:00:31 INFO yarn.ApplicationMaster: Started progress reporter thread
with (heartbeat : 3000, initial allocation : 200) intervals
{noformat}
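For reference, a submission along these lines should produce the container
request shown above (a sketch rather than my exact command; the example class
and jar path are placeholders):
{noformat}
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 1 \
  --executor-cores 2 \
  --executor-memory 1g \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-1.5.0-hadoop2.6.0.jar 100
{noformat}
The requested container size in the log (2 vCores, 1024 MB executor memory
plus 384 MB overhead = 1408 MB) is consistent with these settings.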
Can you please tell us how you submit the application?
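For a dynamic allocation run I would expect, at minimum, properties along
these lines (a sketch assuming the standard external shuffle service setup;
the class and jar are placeholders for your application):
{noformat}
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.executor.cores=6 \
  --class <your main class> \
  <your application jar>
{noformat}
If your actual invocation differs, for example setting cores through a
different property or relying on a cluster default, that could explain the
discrepancy.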
> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
> Key: SPARK-19090
> URL: https://issues.apache.org/jira/browse/SPARK-19090
> Project: Spark
> Issue Type: Bug
> Affects Versions: 1.5.2, 1.6.1, 2.0.1
> Reporter: nirav patel
>
> When enabling dynamic allocation with YARN I see that all executors are using
> only 1 core even if I set "spark.executor.cores" to 6. If dynamic allocation
> is disabled then each executor has 6 cores, i.e. it respects
> "spark.executor.cores". I have tested this against Spark 1.5. I think the
> behavior will be the same with 2.x as well.