GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/560#discussion_r13870892
  
    --- Diff: yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
    @@ -342,24 +352,16 @@ trait ClientBase extends Logging {
           sparkConf.set("spark.driver.extraJavaOptions", opts)
         }
     
    +    // Forward the Spark configuration to the application master / executors.
         // TODO: it might be nicer to pass these as an internal environment variable rather than
         // as Java options, due to complications with string parsing of nested quotes.
    -    if (args.amClass == classOf[ExecutorLauncher].getName) {
    -      // If we are being launched in client mode, forward the spark-conf options
    -      // onto the executor launcher
    -      for ((k, v) <- sparkConf.getAll) {
    -        javaOpts += "-D" + k + "=" + "\\\"" + v + "\\\""
    --- End diff --
    
    So here's what's happening.
    
    `prepareLocalResources()` propagates some settings (previously `CONF_SPARK_YARN_SECONDARY_JARS`, and now the two new settings I'm adding) using `SparkConf.set`. In cluster mode, `createContainerLaunchContext()` propagates only things that are set as system properties, so those config options are not provided to the executors. If your executor depends on something in those options, it will fail.
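    
    For illustration, here is a minimal sketch (hypothetical object and method names, not the actual `ClientBase` code) of what forwarding every `SparkConf` entry to a child JVM as a `-D` Java option looks like, using the same escaped quoting as the removed block:
    
        import scala.collection.mutable.ListBuffer
        
        import org.apache.spark.SparkConf
        
        object ConfForwardingSketch {
          // Hypothetical helper: turn every Spark config entry into a -D system
          // property for the child JVM's command line.
          def forward(sparkConf: SparkConf): Seq[String] = {
            val javaOpts = new ListBuffer[String]
            for ((k, v) <- sparkConf.getAll) {
              // Produces e.g. -Dspark.app.name=\"my app\". The escaped quotes keep
              // values containing spaces intact, but values that themselves contain
              // quotes still break, which is the parsing complication the TODO mentions.
              javaOpts += "-D" + k + "=" + "\\\"" + v + "\\\""
            }
            javaOpts.toList
          }
        }
    
    The change under discussion applies this kind of forwarding unconditionally rather than only when launching `ExecutorLauncher` in client mode, so cluster-mode containers see the same settings.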
    
    I'll double-check whether `SPARK_JAVA_OPTS` works as before and make adjustments if necessary, but the change is needed for things to work correctly.

