panbingkun created SPARK-38960:
----------------------------------

             Summary: Spark should fail fast if the initial memory (set by 
"spark.executor.extraJavaOptions") is too large for the executor to start
                 Key: SPARK-38960
                 URL: https://issues.apache.org/jira/browse/SPARK-38960
             Project: Spark
          Issue Type: Improvement
          Components: Spark Core, Spark Submit, YARN
    Affects Versions: 3.4.0
            Reporter: panbingkun
             Fix For: 3.4.0


If you set the initial heap size (via "spark.executor.extraJavaOptions=-Xms{XXX}G")
larger than the maximum heap size (set by "spark.executor.memory"), e.g.:

spark.executor.memory=1G
spark.executor.extraJavaOptions=-Xms2G

 

then from the driver process you only see executor failures with no explanation,
since the more meaningful error is buried in the executor logs.
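
For reference, a submit command along these lines reproduces it (the example
class and jar path here are illustrative placeholders, not part of this report):

./bin/spark-submit \
  --master yarn \
  --conf spark.executor.memory=1G \
  --conf "spark.executor.extraJavaOptions=-Xms2G" \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.12-3.4.0.jar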

E.g., on YARN, the executor log contains:

Error occurred during initialization of VM

Initial heap size set to a larger value than the maximum heap size

 

Instead, Spark should fail fast with a clear error message in the driver logs.
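
A minimal sketch of the kind of driver-side check this could be, run before any
executors are requested. The object and method names below are hypothetical and
are not existing Spark internals:

object HeapSettingsCheck {
  // Convert a JVM size string like "2G", "512m", or "1048576" to bytes.
  // Throws NumberFormatException on malformed input; a real check would
  // report that separately.
  private def toBytes(size: String): Long = {
    val trimmed = size.trim.toLowerCase
    val (num, multiplier) = trimmed.last match {
      case 'k' => (trimmed.dropRight(1), 1024L)
      case 'm' => (trimmed.dropRight(1), 1024L * 1024)
      case 'g' => (trimmed.dropRight(1), 1024L * 1024 * 1024)
      case _   => (trimmed, 1L)
    }
    num.toLong * multiplier
  }

  // Match an -Xms setting inside the extraJavaOptions string, if present.
  private val XmsPattern = """-Xms(\d+[kKmMgG]?)""".r

  // Fail fast when the requested initial heap exceeds the maximum heap.
  def validate(executorMemory: String, extraJavaOptions: String): Unit = {
    XmsPattern.findFirstMatchIn(extraJavaOptions).foreach { m =>
      val initial = toBytes(m.group(1))
      val maximum = toBytes(executorMemory)
      require(initial <= maximum,
        s"Initial heap size (-Xms${m.group(1)}) set in " +
        s"spark.executor.extraJavaOptions must not exceed " +
        s"spark.executor.memory ($executorMemory)")
    }
  }
}

With such a check in place, HeapSettingsCheck.validate("1G", "-Xms2G") would
throw an IllegalArgumentException with a clear message at submit time, instead
of letting executors die repeatedly with the error hidden in their logs.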


