[ https://issues.apache.org/jira/browse/SPARK-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813689#comment-15813689 ]
nirav patel edited comment on SPARK-19090 at 1/10/17 3:14 AM:
--------------------------------------------------------------

[~jerryshao] "spark.executor.cores" tells the Spark AM how many vcores to request from YARN per container. The Spark AM makes the correct request when dynamic allocation is off, but when it is on it ignores the spark.executor.cores value. I think DRF has nothing to do with this issue. Below are AM logs from two different runs.

Run 1: dynamic allocation enabled

spark.dynamicAllocation.enabled = true
spark.executor.instances = 6
spark.executor.cores = 5

17/01/09 19:05:49 INFO yarn.YarnRMClient: Registering the ApplicationMaster
17/01/09 19:05:49 INFO yarn.YarnAllocator: Will request 6 executor containers, each with 1 cores and 11000 MB memory including 1000 MB overhead
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)
17/01/09 19:05:49 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:1, disks:0.0>)

Run 2: dynamic allocation disabled

spark.dynamicAllocation.enabled = false
spark.executor.instances = 6
spark.executor.cores = 5

17/01/09 19:01:39 INFO yarn.YarnAllocator: Will request 6 executor containers, each with 5 cores and 11000 MB memory including 1000 MB overhead
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)
17/01/09 19:01:39 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:11000, vCores:5, disks:0.0>)

I can also verify this in the Spark UI while the job is running: with dynamic allocation there is only 1 task running per executor.
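For reference, a minimal driver-side sketch of the configuration used in the two runs above. The app name is a placeholder, and spark.shuffle.service.enabled reflects the standard prerequisite for dynamic allocation on YARN; only the three settings quoted above come from the actual runs:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}

// Run 1 configuration; flip spark.dynamicAllocation.enabled to "false" for Run 2.
val conf = new SparkConf()
  .setAppName("executor-cores-test")                // hypothetical app name
  .set("spark.dynamicAllocation.enabled", "true")   // "false" in Run 2
  .set("spark.shuffle.service.enabled", "true")     // dynamic allocation requires the external shuffle service
  .set("spark.executor.instances", "6")
  .set("spark.executor.cores", "5")

val sc = new SparkContext(conf)
{code}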
> Dynamic Resource Allocation not respecting spark.executor.cores
> ---------------------------------------------------------------
>
>                 Key: SPARK-19090
>                 URL: https://issues.apache.org/jira/browse/SPARK-19090
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.5.2, 1.6.1, 2.0.1
>            Reporter: nirav patel
>
> When dynamic scheduling is enabled with YARN, all executors use only 1 core even if "spark.executor.cores" is set to 6. If dynamic scheduling is disabled, each executor gets 6 cores, i.e. it respects "spark.executor.cores". I have tested this against Spark 1.5. I expect the behavior will be the same with 2.x as well.
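As a quick way to observe the reported behavior, a sketch (the task count and sleep duration here are arbitrary choices, not from the report): run more tasks than the total requested core count and watch the Executors tab of the Spark UI while the stage is active.

{code:scala}
// With 6 executors x 5 cores, up to 30 of these tasks should run concurrently;
// with the reported bug, only 6 run at once (1 task per executor).
val rdd = sc.parallelize(1 to 60, 60)
rdd.foreach(_ => Thread.sleep(10000))  // long-running tasks; inspect the UI meanwhile
{code}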