Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/3686#issuecomment-66776483
@scwf
There are many different reasons one might want to specify the cores for
the AM. It mostly applies when running in yarn-cluster mode, where your driver
runs in the AM container as well. Your driver could be doing many different
things, including CPU-intensive work, where you want more cores. You may
also want to isolate your AM on a node by itself, so you artificially tell
it to use a large number of cores. Or your YARN cluster might be configured so
that 1 core doesn't actually mean 1 physical core - for instance, the cluster
might allow scheduling half cores. It's an option in YARN, so we should let
the user specify it if they need to.
It should be similar to the standalone config:
Spark standalone with cluster deploy mode only:
--driver-cores NUM Cores for driver (Default: 1).
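As a rough sketch of what that would look like on YARN (the application name
and jar below are placeholders, and the exact flag/property this PR settles on
may differ):

```shell
# yarn-cluster mode: the driver runs inside the AM container,
# so requesting more cores for the driver also sizes the AM.
spark-submit \
  --master yarn-cluster \
  --driver-cores 4 \
  --class org.example.MyApp \
  myapp.jar
```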