Github user aditanase commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22904#discussion_r239730534
  
    --- Diff: docs/running-on-kubernetes.md ---
    @@ -298,6 +298,16 @@ the Spark application.
     
     ## Kubernetes Features
     
    +### Configuration File
    +
    +Your Kubernetes config file typically lives under `.kube/config` in your
home directory, or in a location specified by the `KUBECONFIG` environment
variable. Spark on Kubernetes will attempt to use this file to perform an
initial auto-configuration of the Kubernetes client used to interact with the
Kubernetes cluster. A variety of Spark configuration properties are provided
that allow further customisation of the client configuration, e.g. using an
alternative authentication method.
    --- End diff ---
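
    As a quick note on the quoted docs: the file and context the client
auto-configures from can be inspected locally. A minimal sketch, assuming
standard kubectl defaults:
    ```
    # Which config file the Kubernetes client will read:
    echo ${KUBECONFIG:-$HOME/.kube/config}
    # Which context it will use by default:
    kubectl config current-context
    ```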
    
    Ok, I just opened https://issues.apache.org/jira/browse/SPARK-26295 and
@vanzin redirected me to this thread. Would love your eyes on that issue, to
see if we can use your work here to close it too.
    
    In short, if there is code that propagates the kube context along this
path, I'm not aware of it; I'd love to see some documentation:
    ```
    laptop with kubectl and context -> k apply -f spark-driver-client-mode.yaml
    -> deployment starts 1 instance of driver pod in arbitrary namespace
    -> spark-submit from start.sh inside the docker container -> ...
    ```
    There is no kubectl or "kube context" in the Docker container; it's just
the Spark distro and my jars. So where would the driver pod get the account
from?
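
    For what it's worth, one place it could come from is the default service
account token that Kubernetes mounts into every pod; if the client falls back
to in-cluster configuration when no kubeconfig is present, that would explain
it. A quick check, run from the laptop since the container has no kubectl (pod
name is illustrative):
    ```
    # List the in-cluster credentials mounted into the driver pod:
    kubectl exec spark-driver-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
    # Expected output: ca.crt  namespace  token
    ```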
    
    PS: agreed that there are too many config options on the auth side; maybe
we could consolidate them further.
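
    To illustrate how much surface there is today, a sketch of a submission
that overrides auth explicitly (all values are made up, and
`spark.kubernetes.context` assumes the property discussed in this PR):
    ```
    spark-submit \
      --master k8s://https://kubernetes.example.com:6443 \
      --deploy-mode cluster \
      --class org.apache.spark.examples.SparkPi \
      --conf spark.kubernetes.context=my-context \
      --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
      --conf spark.kubernetes.authenticate.submission.oauthToken=$TOKEN \
      --conf spark.kubernetes.authenticate.submission.caCertFile=/path/to/ca.crt \
      local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar
    ```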

