Hi,

I am using Spark 1.6.1 and YARN 2.7.4.
I want to submit a Spark application to a YARN cluster. However, I found
that the number of vcores assigned to a container/executor is always 1,
even though I set spark.executor.cores=2. I also found that each executor
runs two tasks concurrently. So it seems that Spark knows an
executor/container has two CPU cores, but the request is not correctly
passed on to the YARN resource scheduler. I am using
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
on YARN.
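
For reference, here is a minimal sketch of how the setting is applied in
my code (the app name, executor count, and job body are just placeholders
for illustration; the relevant line is the spark.executor.cores one):

import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch only: app name, executor count, and the job body are
// placeholders; the relevant setting is spark.executor.cores=2.
object ExecutorCoresTest {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("executor-cores-test")
      .set("spark.executor.cores", "2")      // ask for 2 cores per executor
      .set("spark.executor.instances", "2")  // placeholder executor count
    val sc = new SparkContext(conf)
    // Trivial job so that executors are actually allocated and run tasks.
    println(sc.parallelize(1 to 1000, 8).map(_ * 2).sum())
    sc.stop()
  }
}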

I am wondering whether it is possible to assign multiple vcores to a
container when a Spark job is submitted to a YARN cluster in yarn-cluster
mode.

Thanks!
Best,
Xiaoye
