GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1256#discussion_r14668475
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
    @@ -36,20 +38,27 @@ import scala.collection.mutable.HashMap
      * Note that once a SparkConf object is passed to Spark, it is cloned and can no longer be modified
      * by the user. Spark does not support modifying the configuration at runtime.
      *
    - * @param loadDefaults whether to also load values from Java system properties
    + * @param loadDefaults whether to also load values from Java system properties, file and resource
    + * @param fileName load properties from file
    --- End diff --
    
    Also, I don't see any code ever using the `resource` argument. Is it really needed? Unless it's somehow hooked into SparkSubmit, I don't see it being very useful.
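
    For reference, here is a minimal sketch of how file- and resource-based loading could feed a `SparkConf`. The helper names (`fromFile`, `fromResource`, `ConfLoaderSketch`) are hypothetical; the PR's actual constructor wiring is not shown in this diff hunk:

    ```scala
    import java.io.{FileInputStream, InputStream}
    import java.util.Properties

    import scala.collection.JavaConverters._

    import org.apache.spark.SparkConf

    object ConfLoaderSketch {
      // Copy every property from the stream into the conf, then close the stream.
      private def loadInto(conf: SparkConf, in: InputStream): SparkConf = {
        val props = new Properties()
        try props.load(in) finally in.close()
        props.stringPropertyNames().asScala.foreach { key =>
          conf.set(key, props.getProperty(key))
        }
        conf
      }

      // Load settings from a properties file on the local filesystem,
      // roughly what the new `fileName` parameter implies.
      def fromFile(conf: SparkConf, fileName: String): SparkConf =
        loadInto(conf, new FileInputStream(fileName))

      // Load settings from a classpath resource, presumably what the
      // `resource` argument was meant to drive. Note getResourceAsStream
      // returns null when the resource is missing, so a real implementation
      // would need to handle that.
      def fromResource(conf: SparkConf, resource: String): SparkConf =
        loadInto(conf, getClass.getResourceAsStream(resource))
    }
    ```

    If nothing (SparkSubmit included) ever calls into the resource path, it's dead code, which is the concern here.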

