Hi,

I am trying to understand the Spark internals, so I was looking at the Spark
code flow. In a scenario where I do a spark-submit in YARN cluster mode
with --executor-memory 8g on the command line, how does Spark learn about
this executor memory value? In SparkContext I see:

_executorMemory = _conf.getOption("spark.executor.memory")
                       .orElse(Option(System.getenv("SPARK_EXECUTOR_MEMORY")))
                       .orElse(Option(System.getenv("SPARK_MEM")))
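As I read it, this snippet just resolves in order: the SparkConf value wins,
then the two legacy environment variables. A minimal standalone sketch of the
same fallback pattern (the "1g" default here is only for the demo, not Spark's
actual default):

object FallbackDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for _conf.getOption("spark.executor.memory")
    val fromConf: Option[String] = None
    val resolved = fromConf
      .orElse(Option(System.getenv("SPARK_EXECUTOR_MEMORY")))  // legacy env var
      .orElse(Option(System.getenv("SPARK_MEM")))              // older legacy env var
      .getOrElse("1g")                                         // demo default only
    println(s"executor memory resolved to: $resolved")
  }
}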


Now, SparkConf loads its defaults from the Java system properties, but I did
not find where the command-line value is added to the Java system properties
(sys.props) in YARN cluster mode, i.e. I did not see a call to
Utils.loadDefaultSparkProperties. How does this command-line value reach the
SparkConf that is part of SparkContext?
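To make sure I am reading SparkConf correctly: my understanding is that a
default-constructed SparkConf picks up every JVM system property whose key
starts with "spark.". A small sketch (assuming spark-core on the classpath)
showing that behaviour:

import org.apache.spark.SparkConf

object ConfFromSysProps {
  def main(args: Array[String]): Unit = {
    // Simulate a launcher putting the setting into JVM system properties
    sys.props("spark.executor.memory") = "8g"
    // new SparkConf() loads defaults, i.e. every sys prop starting with "spark."
    val conf = new SparkConf()
    println(conf.getOption("spark.executor.memory"))  // prints Some(8g)
  }
}

So my question is where, in YARN cluster mode, something equivalent happens
for the --executor-memory command-line value.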

Regards,
Vinyas
