Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13499#discussion_r65764703
  
    --- Diff: yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala ---
    @@ -64,7 +68,7 @@ private[spark] class YarnClientSchedulerBackend(
         // SPARK-8851: In yarn-client mode, the AM still does the credentials refresh. The driver
         // reads the credentials from HDFS, just like the executors and updates its own credentials
         // cache.
    -    if (conf.contains("spark.yarn.credentials.file")) {
    +    if (!conf.contains(PRINCIPAL.key) && conf.contains("spark.yarn.credentials.file")) {
    --- End diff ---
    
    That's a different issue, and I'm not sure it's really an issue, given what the comment above this line says.
    
    This code is here for when Spark is managing your Kerberos credentials and delegation tokens, i.e., when you pass `--principal` and `--keytab`. In that case, as the comment above says, the AM is tasked with renewing the credentials and tokens periodically, and this code exists so that the driver (which is not running in the AM in client mode) can get the new tokens.
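    
    To make that flow concrete, here's a minimal sketch of the guarded driver-side logic; `startCredentialUpdater` is a hypothetical name standing in for the Spark-internal helper, not a real Spark API:
    
    ```scala
    import org.apache.spark.SparkConf
    
    object CredentialUpdateSketch {
      // Hypothetical stand-in for the Spark-internal helper that polls HDFS for the
      // refreshed delegation tokens and loads them into the driver's credential cache.
      def startCredentialUpdater(conf: SparkConf): Unit = ()
    
      // Driver side, yarn-client mode: the AM does the actual refresh; the driver only
      // reads the new tokens from the location advertised in spark.yarn.credentials.file.
      def maybeStartCredentialUpdater(conf: SparkConf): Unit = {
        if (conf.contains("spark.yarn.credentials.file")) {
          startCredentialUpdater(conf)
        }
      }
    }
    ```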
    
    You're basically breaking that feature by changing this. If your app is managing the Kerberos login, you'd never pass `--principal` and `--keytab` (or the equivalent settings) to Spark, so you wouldn't run into this problem.
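    
    For contrast, a minimal sketch of the app-managed Kerberos login case, using the standard Hadoop `UserGroupInformation` API (the principal and keytab path below are made up):
    
    ```scala
    import org.apache.hadoop.security.UserGroupInformation
    
    object AppManagedLogin {
      def main(args: Array[String]): Unit = {
        // The application performs the Kerberos login itself and never hands
        // --principal/--keytab (or their config equivalents) to Spark, so
        // spark.yarn.credentials.file is never set and the branch above is skipped.
        UserGroupInformation.loginUserFromKeytab("user@EXAMPLE.COM", "/path/to/user.keytab")
        // ... create the SparkContext and run the app as usual ...
      }
    }
    ```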

