GitHub user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4168#issuecomment-73456491
@sryza Is the API change in `ExecutorAllocationManager` necessary for this
patch? The new API essentially pushes the responsibility of tracking how many
executors the application currently has onto the user. If I as a Spark
application want to incrementally add executors, then I must additionally keep
track of the number of executors I currently have, as we used to do in
`CoarseGrainedSchedulerBackend`. I actually don't see a great use case for
something like `sc.setTotalExecutors` because it kinda expects the user to
know how many executors they need, and that estimate is often difficult to
make.
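To make this concrete, here's a rough sketch of the bookkeeping such an API
pushes onto the caller (the `ExecutorTracker` wrapper is hypothetical, and
`sc.setTotalExecutors` stands in for the method proposed here):

```scala
import org.apache.spark.SparkContext

// Hypothetical caller-side wrapper illustrating the bookkeeping a
// total-based API forces onto the application: to add executors
// incrementally, the caller must mirror the count that
// CoarseGrainedSchedulerBackend used to maintain internally.
class ExecutorTracker(sc: SparkContext, initialTotal: Int) {
  private var currentTotal = initialTotal  // caller-side copy of Spark's state

  def addExecutors(numAdditional: Int): Unit = {
    currentTotal += numAdditional
    sc.setTotalExecutors(currentTotal)  // method name proposed in this PR
  }
}
```

With a delta-based API (e.g. "add N executors"), the counter above would live
inside Spark instead of in every application.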
The rest of it looks fairly straightforward. My comments mostly have to do
with variable naming.