Hi,

I am exploring the dynamic resource allocation provided by the
Standalone Cluster Mode, and I was wondering whether the behavior I am
experiencing is expected.
In my configuration I have 3 slaves with 24 cores each.
I have the following in my spark-defaults.conf:

spark.shuffle.service.enabled true
spark.dynamicAllocation.enabled true
spark.dynamicAllocation.minExecutors 1
spark.dynamicAllocation.maxExecutors 6
spark.executor.cores 4
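
For context, here is roughly the kind of driver program I am submitting; the
application name and the job itself are just placeholders, not my actual code:

import org.apache.spark.{SparkConf, SparkContext}

object DynamicAllocationTest {
  def main(args: Array[String]): Unit = {
    // The dynamic allocation and executor settings are picked up from
    // spark-defaults.conf when the app is launched with spark-submit.
    val conf = new SparkConf().setAppName("DynamicAllocationTest")
    val sc = new SparkContext(conf)

    // A simple shuffle job so the scheduler keeps requesting executors.
    sc.parallelize(1 to 10000000, 100)
      .map(i => (i % 100, i.toLong))
      .reduceByKey(_ + _)
      .count()

    sc.stop()
  }
}

I launch it with something like
spark-submit --master spark://&lt;master-host&gt;:7077 --class DynamicAllocationTest my-app.jar
(the master host and jar name above are placeholders).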

When I submit a first job, it takes up all 72 cores of the cluster, even
though with these settings I would expect each application to be capped at
6 executors * 4 cores = 24 cores.
When I submit a second job while the first one is still running, I get the error:

Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources

Is this the expected behavior?

Thanks a lot
