Github user rvesse commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21669#discussion_r215695909
  
    --- Diff: resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala ---
    @@ -212,6 +212,60 @@ private[spark] object Config extends Logging {
             "Ensure that major Python version is either Python2 or Python3")
           .createWithDefault("2")
     
    +  val KUBERNETES_KERBEROS_PROXY_USER =
    +    ConfigBuilder("spark.kubernetes.kerberos.proxyUser")
    +      .doc("Specify the proxy user " +
    +        "for HadoopUGI login for the Driver + Executors")
    +      .internal()
    +      .stringConf
    +      .createWithDefault("false")
    +
    +  val KUBERNETES_KERBEROS_KRB5_FILE =
    +    ConfigBuilder("spark.kubernetes.kerberos.krb5location")
    +      .doc("Specify the location of the krb5 file " +
    --- End diff ---
    
    It would be worth being explicit here that the KDC defined in this krb5 file needs to be visible from inside the containers, which may not be the case in some network setups.
    
    For example, we have clusters with multiple networks where K8S is bound to a different network than the one our default Kerberos configuration points to. So I had to copy and customise the default krb5 configuration for Kerberos login to get this running during my testing.
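    
    To illustrate, here is a minimal sketch of what such a customised krb5 file might look like, using purely hypothetical realm, domain and KDC host names; the point is that the kdc entry must be an address routable from inside the pods rather than the one in the cluster-wide default configuration:
    
        [libdefaults]
          default_realm = EXAMPLE.COM
        
        [realms]
          EXAMPLE.COM = {
            # KDC address on the network the K8S pods can actually reach
            kdc = kdc.k8s-net.example.com:88
            admin_server = kdc.k8s-net.example.com:749
          }
        
        [domain_realm]
          .example.com = EXAMPLE.COM
    
    The customised file could then be supplied to the driver and executors via the spark.kubernetes.kerberos.krb5location option proposed in this diff.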


---
