Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/731#issuecomment-90743453
@CodingCat Thanks for the latest changes. This is much simpler, and I believe
it does what we want!
On a separate note, I had an offline discussion with @pwendell about the
config semantics. He proposes that we configure the exact number of cores an
executor will have, rather than the maximum number of cores it could have.
That is, instead of adding `spark.deploy.maxCoresPerExecutor`, we will reuse
`spark.executor.cores` as suggested before, but modify the code slightly so
that each executor has exactly N cores instead of at most N cores (where N is
the value of `spark.executor.cores`). I will make more suggestions inline to
indicate what I mean.
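
To make the "exactly N" semantics concrete, here is a minimal sketch (the
`Worker` case class and `assignExecutors` function are hypothetical
illustrations, not the actual Master scheduling code): a worker only launches
whole executors of `coresPerExecutor` cores, and any leftover cores stay
unassigned rather than producing an undersized executor.

```scala
// Hypothetical model of a worker's free cores (illustrative only).
case class Worker(id: String, freeCores: Int)

// Assign cores to workers in exact multiples of coresPerExecutor, so every
// launched executor has exactly N cores (N = spark.executor.cores).
def assignExecutors(
    workers: Seq[Worker],
    coresPerExecutor: Int,
    coresNeeded: Int): Map[String, Int] = {
  var remaining = coresNeeded
  val assignments =
    scala.collection.mutable.Map.empty[String, Int].withDefaultValue(0)
  for (w <- workers if remaining >= coresPerExecutor) {
    // Launch as many full executors as this worker can host; a worker with
    // fewer than coresPerExecutor free cores contributes no executor at all.
    val numExecutors =
      math.min(w.freeCores / coresPerExecutor, remaining / coresPerExecutor)
    if (numExecutors > 0) {
      assignments(w.id) += numExecutors * coresPerExecutor
      remaining -= numExecutors * coresPerExecutor
    }
  }
  assignments.toMap
}
```

For example, with `spark.executor.cores` set to 4, a worker with 7 free cores
would launch one 4-core executor and leave 3 cores idle, instead of launching
a second, 3-core executor as the "at most N" semantics would allow.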