[ https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813941#comment-15813941 ]
Saisai Shao commented on SPARK-19090:
-------------------------------------

{code}
./bin/spark-shell --master yarn-client --conf spark.executor.cores=2
{code}

Please be aware that an explicit executor count (--num-executors / spark.executor.instances) and dynamic allocation cannot coexist; if both are set, dynamic allocation is implicitly turned off. In your case you also set the executor count, which means dynamic allocation was not actually enabled.

> Dynamic Resource Allocation not respecting spark.executor.cores
> ----------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When enabling dynamic scheduling with YARN, I see that all executors use only 1 core even if I set "spark.executor.cores" to 6. If dynamic scheduling is disabled, each executor gets 6 cores, i.e. it respects "spark.executor.cores". I have tested this against Spark 1.5 and expect the same behavior on 2.x as well.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
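For illustration, a minimal sketch of a submission that keeps dynamic allocation active while still requesting multi-core executors (this assumes the external shuffle service is configured on the YARN node managers, which dynamic allocation requires; the exact resource numbers are placeholders):

{code}
./bin/spark-shell --master yarn-client \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.executor.cores=6
{code}

Note that --num-executors / spark.executor.instances is deliberately omitted here; per the comment above, setting it would implicitly disable dynamic allocation.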