Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/3607#issuecomment-69148864
@tgravescs @andrewor14
Sorry for introducing so many issues in the last rebase.
I've fixed them and tested again on my cluster. Here is the
configuration (with `yarn.scheduler.minimum-allocation-mb=256` in YARN):
>spark.driver.memory=5G
>spark.yarn.driver.memoryOverhead=1024
>spark.yarn.am.memory=256m
>spark.yarn.am.memoryOverhead=256
>spark.yarn.executor.memoryOverhead=1024
>spark.executor.memory=1g
>spark.executor.instances=1
In cluster mode, it launches two containers: one using 6G, the other 2G. In
client mode they are 512M and 2G.
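For reference, those container sizes follow from heap plus the explicit overhead, rounded up to a multiple of `yarn.scheduler.minimum-allocation-mb` (256 here). A minimal Scala sketch of that arithmetic (not actual Spark/YARN code; `containerMb` is a made-up helper):

```scala
// Sketch of the container sizing used in the numbers above:
// (heap + overhead), rounded up to the next multiple of the YARN minimum allocation.
object ContainerSizeSketch {
  val yarnMinAllocMb = 256 // yarn.scheduler.minimum-allocation-mb in this test

  def containerMb(heapMb: Int, overheadMb: Int): Int = {
    val requested = heapMb + overheadMb
    math.ceil(requested.toDouble / yarnMinAllocMb).toInt * yarnMinAllocMb
  }

  def main(args: Array[String]): Unit = {
    println(containerMb(5 * 1024, 1024)) // cluster-mode driver: 6144 MB (6G)
    println(containerMb(256, 256))       // client-mode AM: 512 MB
    println(containerMb(1024, 1024))     // executor: 2048 MB (2G)
  }
}
```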
Then, keeping spark-defaults.conf unchanged, I submitted with the command:
`./spark-submit --class org.apache.spark.examples.SparkPi --master
yarn-cluster(yarn-client) --driver-memory 4G --executor-memory 1280m
../lib/spark-examples*.jar`
In cluster mode, one container used 5G and the other 2.25G. In client mode, they
were 512M and 2.25G.
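The hypothetical `containerMb` helper from the sketch above reproduces this second run's numbers as well:

```scala
// Reusing the containerMb sketch (illustrative only, not Spark code):
ContainerSizeSketch.containerMb(4 * 1024, 1024) // 5120 MB (5G): --driver-memory 4G + 1024 overhead
ContainerSizeSketch.containerMb(1280, 1024)     // 2304 MB (2.25G): --executor-memory 1280m + 1024 overhead
```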