[ https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813973#comment-15813973 ]
Saisai Shao commented on SPARK-19090:
-------------------------------------

Spark shell is a real Spark *application*; the underlying SparkSubmit logic is the same...

> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When dynamic allocation is enabled on YARN, all executors use only
> 1 core even if "spark.executor.cores" is set to 6. If dynamic
> allocation is disabled, each executor gets 6 cores, i.e. it respects
> "spark.executor.cores". I have tested this against Spark 1.5; I
> expect the same behavior on 2.x as well.
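For context, a minimal sketch of the setup the reporter describes, assuming the job is submitted with spark-submit --master yarn (the application name and memory value below are illustrative, not taken from the report):

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Sketch of the reported configuration: dynamic allocation on YARN
// together with an explicit per-executor core count. Assumes launch via
// `spark-submit --master yarn`; app name and memory are illustrative.
val conf = new SparkConf()
  .setAppName("dyn-alloc-cores-repro")
  .set("spark.dynamicAllocation.enabled", "true")
  // The external shuffle service must be running on each NodeManager
  // for dynamic allocation to work on YARN.
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.executor.cores", "6")
  .set("spark.executor.memory", "4g")

val sc = new SparkContext(conf)

// What the driver registered; per the report, executors still come up
// with a single core when dynamic allocation is enabled.
println(sc.getConf.get("spark.executor.cores"))
{code}

One known source of confusion worth ruling out (not asserted by the report itself): with YARN's default DefaultResourceCalculator, the ResourceManager accounts for memory only and shows 1 vcore per container regardless of spark.executor.cores, so the vcore count in the YARN UI is not necessarily the number of cores Spark actually uses per executor.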