Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22904#discussion_r237447038
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -298,6 +298,16 @@ the Spark application.
     
     ## Kubernetes Features
     
    +### Configuration File
    +
    +Your Kubernetes config file typically lives under `.kube/config` in your home directory or in a location specified by the `KUBECONFIG` environment variable.  Spark on Kubernetes will attempt to use this file to do an initial auto-configuration of the Kubernetes client used to interact with the Kubernetes cluster.  A variety of Spark configuration properties are provided that allow further customisation of the client configuration, e.g. using an alternative authentication method.
    --- End diff ---
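    
    As a concrete illustration of the "alternative authentication method" mentioned above, overriding the auto-configured credentials at submission looks roughly like this (the master URL, file paths and image name are all placeholders; the `spark.kubernetes.authenticate.submission.*` properties shown have a parallel `spark.kubernetes.authenticate.driver.*` namespace for the driver's in-cluster client in cluster mode):
    
    ```bash
    # Hypothetical cluster-mode submission; every URL, path and name below
    # is a placeholder.  The submission.* properties override what would
    # otherwise be auto-configured from the K8S config file.
    bin/spark-submit \
      --master k8s://https://my-cluster.example.com:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.container.image=my-spark:latest \
      --conf spark.kubernetes.authenticate.submission.caCertFile=/path/to/ca.crt \
      --conf spark.kubernetes.authenticate.submission.clientCertFile=/path/to/client.crt \
      --conf spark.kubernetes.authenticate.submission.clientKeyFile=/path/to/client.key \
      --class org.apache.spark.examples.SparkPi \
      local:///opt/spark/examples/jars/spark-examples.jar
    ```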
    
    To be frank, I'm not really sure why there are different config options for client vs cluster mode; that may have changed with some of the cleanup @vanzin has been doing lately to simplify the configuration code.
    
    Personally I have never needed to use any of the additional configuration properties in either client or cluster mode, as the auto-configuration from my K8S config file has always been sufficient.  At worst I've needed to set `KUBECONFIG` to select the correct config file for the cluster I want to submit to.
    
    Note that the core behaviour (the auto-configuration) has always existed implicitly in the K8S backend; it just wasn't called out explicitly in the docs before.  This PR primarily makes it explicit, and more flexible for users who have multiple contexts in their config files.
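    
    For example, with multiple contexts defined you can now do something like the following (`spark.kubernetes.context` being the property this PR adds; the context name, URL and image name are placeholders):
    
    ```bash
    # See which contexts the active config file defines
    kubectl config get-contexts
    # Submit against a specific context rather than the current one
    bin/spark-submit \
      --master k8s://https://staging.example.com:6443 \
      --deploy-mode cluster \
      --conf spark.kubernetes.context=staging-admin \
      --conf spark.kubernetes.container.image=my-spark:latest \
      --class org.apache.spark.examples.SparkPi \
      local:///opt/spark/examples/jars/spark-examples.jar
    ```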
    
    WRT `spark.master`, Spark in general requires it to always be set, and its value will override whatever server is present in the K8S config file regardless.
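    
    As a sketch of that precedence (the URL is a placeholder, and setting it via `conf/spark-defaults.conf` here is just for illustration):
    
    ```bash
    # spark.master is always required; for K8S it both selects the backend
    # and overrides the server URL recorded in the K8S config file
    echo "spark.master k8s://https://other-cluster.example.com:6443" >> conf/spark-defaults.conf
    ```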

