Hi

"In general, configuration values explicitly set on a SparkConf take the
highest precedence, then flags passed to spark-submit, then values in the
defaults file."
https://spark.apache.org/docs/latest/submitting-applications.html
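
To make that precedence concrete, here is a minimal sketch on the application
side (the "16g" value and the competing sources are purely illustrative): a
SparkConf built with the default constructor picks up any spark.* Java system
properties, which is how values from spark-defaults.conf and spark-submit
flags typically reach the application, and an explicit set() on the conf
overrides them all.

import org.apache.spark.SparkConf

// new SparkConf() loads all spark.* Java system properties by default,
// so anything the launcher put there (defaults file, submit flags) is visible.
val conf = new SparkConf()
  // An explicit set() takes the highest precedence, overriding both
  // spark-submit flags and the defaults file.
  .set("spark.executor.memory", "16g")

// Prints "16g" even if --executor-memory or spark-defaults.conf said otherwise.
println(conf.get("spark.executor.memory"))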

Perhaps this will help, Vinyas:
Look at args.sparkProperties in
https://github.com/apache/spark/blob/v2.3.0/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala
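
As a rough sketch of what happens there (the names below are simplified and
hypothetical, not the real SparkSubmit internals): a dedicated flag like
--executor-memory is folded into the same property map as --conf entries and
the defaults file, and that map is what ultimately shows up as spark.* values
on the driver / ApplicationMaster side, where SparkConf then picks it up.

// Simplified illustration only -- SubmitArgs/effectiveConf are made-up names.
case class SubmitArgs(executorMemory: Option[String],
                      sparkProperties: Map[String, String])

def effectiveConf(args: SubmitArgs,
                  defaults: Map[String, String]): Map[String, String] = {
  // --executor-memory is effectively sugar for spark.executor.memory
  val fromFlag: Map[String, String] =
    args.executorMemory.map(m => Map("spark.executor.memory" -> m)).getOrElse(Map.empty)
  // later entries win: defaults file < --conf entries < dedicated flags
  defaults ++ args.sparkProperties ++ fromFlag
}

// effectiveConf(SubmitArgs(Some("8g"), Map.empty),
//               Map("spark.executor.memory" -> "4g"))
// returns Map("spark.executor.memory" -> "8g")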

On Thu, Mar 15, 2018 at 1:53 AM, Vinyas Shetty <vinyasshett...@gmail.com>
wrote:

>
> Hi,
>
> I am trying to understand the Spark internals, so I was looking at the Spark
> code flow. In a scenario where I do a spark-submit in YARN cluster mode with
> --executor-memory 8g on the command line, how does Spark know about this
> executor memory value, since in SparkContext I see:
>
> _executorMemory = _conf.getOption("spark.executor.memory")
>   .orElse(Option(System.getenv("SPARK_EXECUTOR_MEMORY")))
>   .orElse(Option(System.getenv("SPARK_MEM")))
>
>
> Now SparkConf loads its defaults from Java system properties, but I did not
> find where the command-line value is added to the Java system properties
> (sys.props) in YARN cluster mode, i.e. I did not see a call to
> Utils.loadDefaultSparkProperties. How does this command-line value reach the
> SparkConf that is part of SparkContext?
>
> Regards,
> Vinyas
>
>