Yes, it will.
In general, Spark will spawn as many executors as the available resources on a
node allow, for the same application or different ones.
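
For illustration, here is a minimal sketch of a configuration where that can
happen (the 16-core / 64g node size, memory values, and app name are my own
assumptions, not something stated in this thread):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("half-node-executors")
      // Each executor requests roughly half of an assumed 16-core / 64g worker,
      // so the cluster manager is free to place two executors of this same
      // application on one node when it has the room.
      .config("spark.executor.cores", "8")
      .config("spark.executor.memory", "28g") // leave headroom for overhead
      // Dynamic allocation lets Spark add executors as pending tasks build up.
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.maxExecutors", "20")
      .config("spark.shuffle.service.enabled", "true") // needed for dynamic allocation on YARN
      .getOrCreate()

With settings like these the scheduler only sees per-executor resource requests,
not a one-executor-per-node rule, so co-locating two of them is expected.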


On Tue, Feb 26, 2019, 11:59 Anton Puzanov <antonpuzdeve...@gmail.com> wrote:

> Hello everyone,
>
> Spark has a dynamic resource allocation scheme in which, when resources are
> available, the Spark manager will automatically add executors to the
> application.
>
> By default an executor claims the entire worker node it runs on, but this is
> configurable. My question is: if an executor is configured to use half of a
> worker node, is it possible that Spark will spawn two executors belonging to
> the same application on the same worker node?
>
> Thanks,
> Anton.
>
