Hello everyone,

Spark has a dynamic resource allocation scheme where, when resources are
available, the cluster manager will automatically add executors to the
application.
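
For reference, this is roughly how I am enabling it (the min/max executor
counts and app name below are just placeholders, not my real settings):

    import org.apache.spark.sql.SparkSession

    // Enable dynamic allocation; the external shuffle service is needed
    // so executors can be removed without losing shuffle data.
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-sketch")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "1")
      .config("spark.dynamicAllocation.maxExecutors", "10")
      .config("spark.shuffle.service.enabled", "true")
      .getOrCreate()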

Spark's default configuration is for an executor to claim the entire worker
node it is running on, but this is configurable. My question is: if an
executor is configured to use only half of a worker node, is it possible
that Spark will spawn two executors belonging to the same application on
the same worker node?
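
Concretely, assuming a hypothetical worker with 16 cores and 64 GB of RAM
(just an example to make the question precise), the setup I have in mind
looks like this:

    import org.apache.spark.sql.SparkSession

    // Cap each executor at roughly half of the example worker, so that
    // two such executors could in principle fit on one worker node.
    val spark = SparkSession.builder()
      .appName("half-worker-executor")
      .config("spark.executor.cores", "8")     // half of the worker's 16 cores
      .config("spark.executor.memory", "28g")  // about half the memory, leaving overhead headroom
      .getOrCreate()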

Thanks,
Anton.
