[ https://issues.apache.org/jira/browse/SPARK-4730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Josh Rosen resolved SPARK-4730.
-------------------------------
    Resolution: Fixed
 Fix Version/s: 1.2.1
                1.3.0

Issue resolved by pull request 3590
[https://github.com/apache/spark/pull/3590]

> Warn against deprecated YARN settings
> -------------------------------------
>
>                 Key: SPARK-4730
>                 URL: https://issues.apache.org/jira/browse/SPARK-4730
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.2.0
>            Reporter: Andrew Or
>            Assignee: Andrew Or
>             Fix For: 1.3.0, 1.2.1
>
> Spark on YARN currently reads SPARK_MASTER_MEMORY and SPARK_WORKER_MEMORY. If
> you have these settings left over from a standalone cluster setup and you try
> to run Spark on YARN on the same cluster, your executors suddenly get the
> amount of memory specified through SPARK_WORKER_MEMORY.
> This behavior exists largely for backward compatibility. At the very least,
> however, we should log a warning against the use of these variables.
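For context, below is a minimal sketch in Scala of the kind of check the description asks for. The object name, the suggested replacement settings, and the use of println in place of Spark's internal logWarning are illustrative assumptions; this is not the code from pull request 3590.

{code:scala}
object DeprecatedYarnEnvVars {
  // Standalone-mode variables that Spark on YARN still honors for backward
  // compatibility, mapped to suggested replacements. The mapping here is
  // illustrative, not taken from the actual patch.
  private val replacements = Map(
    "SPARK_MASTER_MEMORY" -> "spark.driver.memory (or --driver-memory)",
    "SPARK_WORKER_MEMORY" -> "spark.executor.memory (or --executor-memory)")

  /** Warn once for each deprecated variable present in the environment. */
  def warnIfSet(env: Map[String, String] = sys.env): Unit = {
    for ((name, replacement) <- replacements; value <- env.get(name)) {
      // A plain println stands in for Spark's internal logWarning.
      println(s"WARN: $name is deprecated when running on YARN " +
        s"(currently set to '$value'); use $replacement instead.")
    }
  }
}
{code}

Calling something like warnIfSet() early in YARN client setup would surface leftover standalone settings without breaking existing jobs, which matches the issue's intent of warning against the variables rather than removing them.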