Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/560#discussion_r13867773
  
    --- Diff: yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
    @@ -342,24 +352,16 @@ trait ClientBase extends Logging {
           sparkConf.set("spark.driver.extraJavaOptions", opts)
         }
     
    +    // Forward the Spark configuration to the application master / executors.
         // TODO: it might be nicer to pass these as an internal environment variable rather than
         // as Java options, due to complications with string parsing of nested quotes.
    -    if (args.amClass == classOf[ExecutorLauncher].getName) {
    -      // If we are being launched in client mode, forward the spark-conf 
options
    -      // onto the executor launcher
    -      for ((k, v) <- sparkConf.getAll) {
    -        javaOpts += "-D" + k + "=" + "\\\"" + v + "\\\""
    --- End diff --
    
    Can you elaborate? SparkConf will include all the system properties that 
start with "spark", which is exactly what the previous code was doing when 
reading system properties directly.
    
    I'll re-test without this fix (my tests were failing without it, but after 
reading the rest of the code I'm not so sure that was the cause). In general, 
though, I think this is a cleaner way of doing it, since it treats both cases 
the same.
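
    For context, here is a minimal sketch of the escaping that the removed loop performed when forwarding conf entries as `-D` Java options. The config entries are made up for illustration; the escaping expression matches the removed line in the diff, and the nested backslash-escaped quotes are exactly the string-parsing complication the TODO mentions:

    ```scala
    // Sketch only: illustrative config entries, not real SparkConf contents.
    object ForwardConfSketch {
      def main(args: Array[String]): Unit = {
        val sparkConf = Seq(
          "spark.executor.memory" -> "2g",
          // A value that itself contains spaces and quotes is where this
          // escaping scheme gets fragile.
          "spark.driver.extraJavaOptions" -> "-Dfoo=\"bar baz\""
        )
        // Same escaping as the removed code: -Dkey=\"value\"
        val javaOpts = sparkConf.map { case (k, v) =>
          "-D" + k + "=" + "\\\"" + v + "\\\""
        }
        javaOpts.foreach(println)
      }
    }
    ```

    Passing the conf through an environment variable instead would sidestep this quoting entirely, which is what the TODO suggests.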
