Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15009#discussion_r111229901
  
    --- Diff: 
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala 
---
    @@ -58,14 +58,24 @@ import org.apache.spark.util.{CallerContext, Utils}
     private[spark] class Client(
         val args: ClientArguments,
         val hadoopConf: Configuration,
    -    val sparkConf: SparkConf)
    +    val sparkConf: SparkConf,
    +    val sysEnvironment: scala.collection.immutable.Map[String, String])
    --- End diff ---
    
    So, when I asked you to make up your mind about whether to support 
custom envs, *this* is what I meant. You can revert all changes in this 
file that now use `sysEnvironment` instead of `sys.env`, including the new 
constructor that takes an env map and the new parameter to `populateClasspath`.
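To illustrate the suggested revert in the abstract: since `sys.env` is already an immutable `Map[String, String]`, code can read the process environment directly instead of threading an env map through the constructor. This is only a minimal sketch of that pattern; the class and method names below are illustrative, not the actual Spark `Client` code.

```scala
// Hedged sketch of reading sys.env directly rather than injecting an env map.
// `EnvReader`, `lookup`, and `withDefault` are hypothetical names for
// illustration only.
class EnvReader {
  // sys.env is an immutable Map[String, String], so no extra constructor
  // parameter is needed to access the process environment.
  def lookup(key: String): Option[String] = sys.env.get(key)

  // Fall back to a default when the variable is unset.
  def withDefault(key: String, default: String): String =
    sys.env.getOrElse(key, default)
}

object EnvReader {
  def main(args: Array[String]): Unit = {
    val r = new EnvReader
    // Deliberately unlikely variable name, so the fallback is used.
    println(r.withDefault("DEFINITELY_NOT_SET_12345", "fallback"))
  }
}
```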

