I am running a 3-node (32 cores, 60 GB each) YARN cluster for Spark jobs.

1) Below are my YARN memory settings:
yarn.nodemanager.resource.memory-mb = 52224
yarn.scheduler.minimum-allocation-mb = 40960
yarn.scheduler.maximum-allocation-mb = 52224

Apache Spark memory settings:

export SPARK_EXECUTOR_MEMORY=40G
export SPARK_EXECUTOR_CORES=27
export SPARK_EXECUTOR_INSTANCES=3

With the above settings I was hoping to see my job run on two nodes; however, the job is not running on the node where the Application Master is running.

2) YARN memory settings:

yarn.nodemanager.resource.memory-mb = 52224
yarn.scheduler.minimum-allocation-mb = 20480
yarn.scheduler.maximum-allocation-mb = 52224

Apache Spark memory settings:

export SPARK_EXECUTOR_MEMORY=18G
export SPARK_EXECUTOR_CORES=13
export SPARK_EXECUTOR_INSTANCES=4

I would like to know how I can run the job on both nodes with the first memory settings. Thanks for the help.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-yarn-cluster-Application-Master-not-running-yarn-container-tp19761.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
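A likely explanation for the first configuration: YARN's default schedulers normalize every container request up to a multiple of yarn.scheduler.minimum-allocation-mb (capped at the maximum). With a 40960 MB minimum, even a small Application Master container is rounded up to 40 GB, leaving too little on that node for a 40G executor plus its off-heap overhead. The sketch below works through that arithmetic; the AM request size (1024 MB) and the overhead values (roughly 10% of executor memory, per spark.yarn.executor.memoryOverhead defaults) are assumptions, not values taken from this cluster.

```python
import math

def normalized(request_mb, min_alloc_mb, max_alloc_mb):
    # YARN's default schedulers round a container request up to a
    # multiple of yarn.scheduler.minimum-allocation-mb, capped at
    # yarn.scheduler.maximum-allocation-mb.
    rounded = math.ceil(request_mb / min_alloc_mb) * min_alloc_mb
    return min(rounded, max_alloc_mb)

NODE_MB = 52224  # yarn.nodemanager.resource.memory-mb per node

# --- Config 1: minimum-allocation = 40960 MB ---
am1 = normalized(1024, 40960, 52224)            # AM rounded up to 40960
exec1 = normalized(40960 + 4096, 40960, 52224)  # 40G heap + ~4G overhead
print(am1, exec1, NODE_MB - am1)
# AM node has 52224 - 40960 = 11264 MB free, far less than the
# 52224 MB executor request -> no executor lands on the AM node.

# --- Config 2: minimum-allocation = 20480 MB ---
am2 = normalized(1024, 20480, 52224)            # AM rounded up to 20480
exec2 = normalized(18432 + 2048, 20480, 52224)  # 18G heap + ~2G overhead
print(am2, exec2, NODE_MB - am2)
# AM node has 31744 MB free, enough for a 20480 MB executor,
# so the job can run on both nodes.
```

Under this reading, keeping yarn.scheduler.minimum-allocation-mb small (it is a rounding granularity, not a reservation) lets the AM container stay small so an executor can co-locate with it, which is what the second configuration achieves.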