GitHub user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/7532#issuecomment-123531568
Hey, @andrewor14 , this patch is very useful, especially for users like
me who run Spark in standalone mode.
I left some more comments here, mostly about the outdated comments in
SparkContext, which say that dynamic allocation is only supported in YARN mode.
The other question: from reading the code, I think that if an application
requests fewer resources than it has already been assigned (by calling
`sc.requestTotalExecutors(x)`), it will not get more resources but will keep
running with the resources it already holds, right?
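To make the behavior I'm describing concrete, here is a minimal sketch
(assuming a running SparkContext `sc`; the executor counts are made up for
illustration):

```scala
// Suppose the application currently holds 10 executors.
// Lowering the target (as far as I can tell from the code) does not
// kill anything; the scheduler just stops asking the master for more.
sc.requestTotalExecutors(5)  // target total lowered to 5...
// ...but all 10 registered executors keep running until they are
// explicitly killed via sc.killExecutors(...) or their worker goes away.
```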
If so... I'm questioning whether that is a good way to manage resources:
the application has to release resources by explicitly calling
`sc.killExecutors()`? However... `killExecutors` is pretty hard to use, since
I need to get the exact IDs of the executors... can we add a new API like
`killExecutors(numExecutors: Int)`?
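A rough sketch of what such an API could look like, built on the existing
`getExecutorStorageStatus` and `killExecutors(Seq[String])` calls. The helper
itself is just my proposal, not anything in this patch, and the `"<driver>"`
filter assumes the driver's block manager shows up in that list:

```scala
import org.apache.spark.SparkContext

// Proposed convenience API (illustration only): kill some number of
// executors without the caller having to look up their IDs first.
def killExecutors(sc: SparkContext, numExecutors: Int): Boolean = {
  val victims = sc.getExecutorStorageStatus
    .map(_.blockManagerId.executorId)
    .filter(_ != "<driver>")   // leave the driver's own block manager alone
    .take(numExecutors)
    .toSeq
  // delegate to the existing ID-based API
  sc.killExecutors(victims)
}
```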