GitHub user jacek-lewandowski commented on the pull request:

    https://github.com/apache/spark/pull/3571#issuecomment-69045861
  
    @vanzin I read about YARN and Mesos deployments and found that introducing those 
namespaces does not make much sense, because a single actor system cannot have 
different configurations. 
    
    For standalone deployment, it will work as I said before: the user can 
provide their own configuration in SparkConf; to fetch it, the worker's 
configuration is used first and is then overridden by the user's 
configuration.
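    
    Here is roughly the precedence I mean for the standalone case, as a minimal 
sketch (the `mergeSslSettings` helper and the key names are illustrative, not 
code from this PR):
    
        import org.apache.spark.SparkConf

        object SslConfMergeSketch {
          // Start from the settings the worker was started with, then let any
          // spark.ssl.* entries from the user's SparkConf take precedence.
          def mergeSslSettings(workerDefaults: Map[String, String],
                               userConf: SparkConf): Map[String, String] = {
            val userSsl = userConf.getAll.toMap.filter { case (k, _) => k.startsWith("spark.ssl.") }
            workerDefaults ++ userSsl  // right-hand map wins, i.e. the user config overrides the worker config
          }

          def main(args: Array[String]): Unit = {
            val workerDefaults = Map(
              "spark.ssl.enabled"  -> "true",
              "spark.ssl.keyStore" -> "/etc/spark/worker-keystore.jks")
            val userConf = new SparkConf(loadDefaults = false)
              .set("spark.ssl.keyStore", "/home/user/my-keystore.jks")
            // keyStore comes from the user conf, the enabled flag from the worker defaults
            println(mergeSslSettings(workerDefaults, userConf))
          }
        }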
    
    For YARN, we still need some settings to fetch the configuration from the 
driver. I repeated the solution already applied for the spark.auth 
namespace: the SSL settings are required before the actual configuration is 
applied (the same way the secret token is handled right now).
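    
    To make the YARN part concrete, this is a small sketch of what I mean by 
settings that are required before the actual configuration is applied (the 
`bootstrapSettings` helper and the prefix list are illustrative names, not the 
code in this PR):
    
        object SslBootstrapSketch {
          // Keys that must be known before the executor can talk to the driver at all,
          // analogous to how the auth secret is made available up front; everything
          // else waits for the full configuration fetched from the driver.
          private val bootstrapPrefixes = Seq("spark.ssl.", "spark.authenticate")

          def bootstrapSettings(env: Map[String, String]): Map[String, String] =
            env.filter { case (k, _) => bootstrapPrefixes.exists(p => k.startsWith(p)) }

          def main(args: Array[String]): Unit = {
            val executorEnv = Map(
              "spark.ssl.enabled"         -> "true",
              "spark.ssl.trustStore"      -> "/etc/spark/truststore.jks",
              "spark.authenticate.secret" -> "secret",
              "spark.executor.memory"     -> "4g")
            // Only the transport-related keys are applied before contacting the driver.
            println(bootstrapSettings(executorEnv))
          }
        }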
    
    What do you think? I really want to move this PR forward. 


