hi all, I have a job that works fine in yarn-client mode, but when I try yarn-cluster mode it returns the following:
WARN YarnClusterScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

The cluster has plenty of memory and resources. I am running this from Python using this context (note the closing parenthesis, which was missing in my first paste):

    conf = (SparkConf()
        .setMaster("yarn-cluster")
        .setAppName("spark_tornado_server")
        .set("spark.executor.memory", "1024m")
        .set("spark.cores.max", 16)
        .set("spark.driver.memory", "1024m")
        .set("spark.executor.instances", 2)
        .set("spark.executor.cores", 8)
        .set("spark.eventLog.enabled", False))

HADOOP_HOME and HADOOP_CONF_DIR are also set in spark-env. Not sure if I am missing some config. Thanks.

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/yarn-does-not-accept-job-in-cluster-mode-tp15281.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
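One thing worth checking: cluster deploy mode is normally chosen at submit time rather than inside the application, because in yarn-cluster mode the driver itself has to be launched on the cluster before any user code (including `setMaster`) runs. A sketch of the equivalent submission via spark-submit, assuming the script is called `spark_tornado_server.py` (hypothetical name, taken from the app name above), would be something like:

```shell
# Submit the job with cluster deploy mode selected on the command line
# rather than via SparkConf.setMaster("yarn-cluster") inside the script.
# Resource settings mirror the SparkConf values from the post.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1024m \
  --executor-memory 1024m \
  --num-executors 2 \
  --executor-cores 8 \
  spark_tornado_server.py
```

With this approach the script would create its context from a plain `SparkConf()` without calling `setMaster`, letting spark-submit decide where the driver runs.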