[ https://issues.apache.org/jira/browse/SPARK-38960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17524654#comment-17524654 ]
panbingkun commented on SPARK-38960:
------------------------------------

I will do it.

> Spark should fail fast if initial memory is too large (set by "spark.executor.extraJavaOptions") for the executor to start
> ---------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-38960
>                 URL: https://issues.apache.org/jira/browse/SPARK-38960
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, Spark Submit, YARN
>    Affects Versions: 3.4.0
>            Reporter: panbingkun
>            Priority: Minor
>             Fix For: 3.4.0
>
> If you set the initial heap size (via "spark.executor.extraJavaOptions=-Xms{XXX}G") to a value larger than the maximum heap size (set by "spark.executor.memory"), e.g.:
>
> *spark.executor.memory=1G*
> *spark.executor.extraJavaOptions=-Xms2G*
>
> then the driver process only sees executor failures with no warning, since the more meaningful errors are buried in the executor logs. For example, on YARN you see:
> {noformat}
> Error occurred during initialization of VM
> Initial heap size set to a larger value than the maximum heap size{noformat}
> Instead, Spark should fail fast with a clear error message in the driver logs.
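A minimal sketch of the kind of fail-fast check this issue asks for, in Scala. The object and method names here (HeapSizeCheck, validateInitialHeap, parseMemoryToMb) are illustrative assumptions, not actual Spark internals, and the parsing covers only the common -Xms suffix forms:

{code:scala}
// Hypothetical sketch of a driver-side fail-fast check; the names below
// are illustrative and do not correspond to real Spark APIs.
object HeapSizeCheck {

  // Convert a JVM memory string such as "2G", "512m", or "1048576k" to MiB.
  // A plain integer with no suffix is interpreted as bytes.
  private def parseMemoryToMb(s: String): Long = s.last.toLower match {
    case 'g' => s.init.toLong * 1024
    case 'm' => s.init.toLong
    case 'k' => s.init.toLong / 1024
    case _   => s.toLong / (1024 * 1024)
  }

  // Throw at submit time if -Xms in the extra JVM options exceeds the
  // executor memory, instead of letting the executor JVM die silently.
  def validateInitialHeap(extraJavaOptions: String, executorMemory: String): Unit = {
    val xmsPattern = """-Xms(\d+[gGmMkK]?)""".r
    xmsPattern.findFirstMatchIn(extraJavaOptions).foreach { m =>
      val initialMb = parseMemoryToMb(m.group(1))
      val maxMb     = parseMemoryToMb(executorMemory)
      require(initialMb <= maxMb,
        s"Initial heap size (-Xms${m.group(1)}) set in spark.executor.extraJavaOptions " +
        s"must not be larger than spark.executor.memory ($executorMemory)")
    }
  }
}
{code}

For the example in the description, HeapSizeCheck.validateInitialHeap("-Xms2G", "1G") would throw an IllegalArgumentException with a clear message in the driver, rather than leaving the "Initial heap size set to a larger value than the maximum heap size" error buried in the executor logs.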