I guess you're using DefaultResourceCalculator for the capacity scheduler;
could you please check your capacity scheduler configuration?

By default, this resource calculator only honors memory as a resource, so
vcores will always show as 1 no matter what value you set (Spark internally
still gets the correct core number, so the logic is fine; only the Yarn web
UI is misleading).
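
If you want to confirm which calculator is in effect, here is a minimal
sketch of what the default looks like in capacity-scheduler.xml (property
and class names are from the Hadoop docs; the file location depends on
your distribution):

    <!-- Default: the scheduler accounts for memory only, which is why
         every container shows up as 1 vcore in the web UI. -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    </property>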

You could change to DominantResourceCalculator; with that configuration,
CPU is also counted as a resource, and you'll see the vcores you actually
requested in the Yarn web UI.
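
A minimal sketch of that change, in the same capacity-scheduler.xml
(refresh the queues with yarn rmadmin -refreshQueues, or restart the
ResourceManager, for it to take effect):

    <!-- Account for CPU as well as memory when scheduling, so the
         requested vcores show up in the web UI. -->
    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
    </property>

After that, the UI should report the vcores you passed via --executor-cores.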

Thanks
Saisai


On Tue, Dec 22, 2015 at 9:25 AM, Siva <sbhavan...@gmail.com> wrote:

> Hi Saisai,
>
> Total Vcores shown in the yarn applications web UI (on port 8088) before
> and after varies only with the number of executors + 1 driver core. If I
> request 10 executors, I see only 11 vcores in use in the yarn application
> web UI.
>
> Thanks,
> Sivakumar Bhavanari.
>
> On Mon, Dec 21, 2015 at 5:21 PM, Saisai Shao <sai.sai.s...@gmail.com>
> wrote:
>
>> Hi Siva,
>>
>> How did you know that --executor-cores is ignored and where did you see
>> that only 1 Vcore is allocated?
>>
>> Thanks
>> Saisai
>>
>> On Tue, Dec 22, 2015 at 9:08 AM, Siva <sbhavan...@gmail.com> wrote:
>>
>>> Hi Everyone,
>>>
>>> Observing a strange problem while submitting a Spark streaming job in
>>> yarn-cluster mode through spark-submit. All the executors are using only
>>> 1 Vcore, irrespective of the value of the parameter --executor-cores.
>>>
>>> Are there any config parameters that override the --executor-cores value?
>>>
>>> Thanks,
>>> Sivakumar Bhavanari.
>>>
>>
>>
>
