Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/3861#issuecomment-69145425
Hi @andrewor14, yes the concerns are identical. Basically in coarse-grained
mode, I don't over-allocate executors even if you request more than the
available slaves.
So really, dynamic allocation in coarse-grained mode simply provides a way to
scale the executors down, and when more are needed, to scale back up to at
most the number of available slave nodes, while still staying within
spark.cores.max.
This could change of course, as there are discussions about allowing
coarse-grained mode to launch multiple executors per slave, but that's a
separate discussion.
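
For context, a minimal sketch of the kind of setup being described, using
standard Spark configuration properties (the master URL, app name, and
numeric values below are placeholders, not taken from this PR):

    import org.apache.spark.{SparkConf, SparkContext}

    // Hypothetical setup: coarse-grained Mesos mode with dynamic allocation.
    // Executors can be scaled down when idle and scaled back up on demand,
    // but never beyond the available slaves or the spark.cores.max cap.
    val conf = new SparkConf()
      .setMaster("mesos://zk://host:2181/mesos")    // placeholder Mesos master URL
      .setAppName("dynamic-allocation-example")
      .set("spark.mesos.coarse", "true")            // coarse-grained mode
      .set("spark.cores.max", "48")                 // overall core cap for the app
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.shuffle.service.enabled", "true") // required for dynamic allocation

    val sc = new SparkContext(conf)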