I think I found the problem.

I had to change the YARN capacity scheduler to use the DominantResourceCalculator.
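
For anyone who hits the same thing: the change I mean is the resource calculator in capacity-scheduler.xml, roughly like the snippet below (standard Hadoop property; refresh the ResourceManager queues after editing):

  <property>
    <!-- switch from the default DefaultResourceCalculator (memory-only) so vcores are honored -->
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
  </property>

The default DefaultResourceCalculator only accounts for memory when scheduling, which is why every container reports a single vcore no matter what --executor-cores is set to.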

Thanks!


On Fri, Sep 25, 2015 at 4:54 AM, Akhil Das <ak...@sigmoidanalytics.com>
wrote:

> Which version of Spark are you using? Can you also check what's set in
> your conf/spark-defaults.conf file?
>
> Thanks
> Best Regards
>
> On Fri, Sep 25, 2015 at 1:58 AM, Gavin Yue <yue.yuany...@gmail.com> wrote:
>
>> Running a Spark app on YARN 2.7.
>>
>> Here is my spark-submit setting:
>> --master yarn-cluster \
>>  --num-executors 100 \
>>  --executor-cores 3 \
>>  --executor-memory 20g \
>>  --driver-memory 20g \
>>  --driver-cores 2 \
>>
>> But the executor-cores setting is not working. It always assigns only one
>> vcore to each container, based on the cluster metrics on the YARN
>> ResourceManager website.
>>
>> And the YARN container settings are
>> min: <memory:6600, vCores:4>  max: <memory:106473, vCores:15>
>>
>> I have tried changing num-executors and executor memory. It even ignores
>> the min vCores setting and always assigns one core per container.
>>
>> Any advice?
>>
>> Thank you!
>>
>
