> Nope. When using dynamic allocation, it's a static number of executors
> throughout the job, no max or min. If you specify the max value config,
> it'd be either ignored or result in an error (likely just ignored - haven't
> tested).

No, you can still specify the min and max number of executors even if
you enable dynamic allocation, by using:
- spark.dynamicAllocation.maxExecutors (default: infinity)
- spark.dynamicAllocation.minExecutors (default: 0)

That's why I was suggesting simply changing:
  --num-executors ${SPK_EXEC} \
into:
  --conf spark.dynamicAllocation.maxExecutors=${SPK_EXEC} \
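For reference, a submit command with both bounds set could look like the sketch below (SPK_EXEC is the variable from your script; the minExecutors value of 1 is just an illustration, and the trailing "..." stands for the rest of your existing arguments):

```shell
spark-submit \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=${SPK_EXEC} \
  ...
```

With this, Spark scales the executor count between the two bounds based on the backlog of pending tasks, instead of holding a fixed --num-executors for the whole job.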

Also, don't we also need spark.shuffle.service.enabled=true? The docs
seem to say it's required for dynamic allocation.
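If that's the case, it would be one more --conf line in the same submit command (a sketch, assuming the external shuffle service is available on the cluster's node managers):

```shell
  --conf spark.shuffle.service.enabled=true \
```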

Giacomo
