Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-167899082
> Earlier, you had also suggested offering an option for the amount of
> memory per executor. Is that still valid in your proposal?
What do you mean? You can already do that through `spark.executor.memory`,
even before this patch.
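For reference, a minimal sketch of setting per-executor memory with the existing config; the Mesos master URL and the `4g` value below are placeholders, not anything introduced by this patch:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the master URL and memory figure are placeholder values.
val conf = new SparkConf()
  .setAppName("executor-memory-example")
  .setMaster("mesos://zk://zk-host:2181/mesos")  // placeholder Mesos master
  .set("spark.executor.memory", "4g")            // memory allotted to each executor

val sc = new SparkContext(conf)
```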
> At one point, you also suggested that the framework should launch as many
> executors as needed to use all or nearly all the cores on each node. I would
> prefer that this be overridable by specifying the maximum number of executors
> to use per node. This makes it easier to use Spark on a cluster shared by
> multiple users or applications.
I agree, though we should try to come up with a minimal set of
configurations that conflict with each other as little as possible. I haven't
decided exactly what that set would look like, but it could come in a later patch.
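To illustrate the kind of per-node cap being discussed, here is a rough sketch of how a scheduler could bound executors on a single node; the `maxExecutorsPerNode` setting is purely hypothetical and is not part of Spark or this patch:

```scala
// Hypothetical sketch: this helper does not exist in Spark. It only illustrates
// capping the number of executors launched on one node for a given resource offer.
def executorsToLaunch(
    offeredCores: Int,         // cores offered by Mesos on this node
    coresPerExecutor: Int,     // e.g. spark.executor.cores
    maxExecutorsPerNode: Int   // hypothetical per-node cap
  ): Int = {
  val byCores = offeredCores / coresPerExecutor
  math.min(byCores, maxExecutorsPerNode)
}

// Example: a 16-core offer with 4 cores per executor, capped at 2 executors per node.
// executorsToLaunch(16, 4, 2) == 2
```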
> It's really unfortunate that this patch was closed without merging.
Actually it will be re-opened shortly, just with a slightly different
approach. I believe @tnachen is currently on vacation but once he comes back
we'll move forward again. :)