GitHub user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/4688#discussion_r27220592
  
    --- Diff: 
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
 ---
    @@ -234,9 +236,14 @@ class CoarseGrainedSchedulerBackend(scheduler: 
TaskSchedulerImpl, val actorSyste
             properties += ((key, value))
           }
         }
    +
         // TODO (prashant) send conf instead of properties
         driverActor = actorSystem.actorOf(
           Props(new DriverActor(properties)), name = 
CoarseGrainedSchedulerBackend.ACTOR_NAME)
    +
    +    // If a principal and keytab have been set, use that to create new 
credentials for executors
    +    // periodically
    +    SparkHadoopUtil.get.scheduleLoginFromKeytab()
    --- End diff --
    
    All the logic for this really lives in YarnSparkHadoopUtil, and it only 
applies to YARN.  I could see some of it possibly being reused for standalone 
mode, but everything else here is set up for YARN only, so perhaps a better 
place for this is in YarnSchedulerBackend.   Which brings me to my second 
thought: we seem to be putting a lot of stuff into YarnSparkHadoopUtil now.  It 
used to contain just utility functions, and now we have threads and other 
machinery that isn't interfaced through SparkHadoopUtil.    Perhaps splitting 
out some sort of security/token manager would be a better design.
    
    thoughts from others on this?
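    To make the suggested split concrete, here is a minimal sketch of what a 
separate security/token manager could look like. All names here 
(`CredentialRenewer`, `renewalIntervalMs`, the `renew` callback) are 
hypothetical and not from this PR: the idea is just that the manager owns the 
renewal thread, and the YARN backend wires in the actual keytab re-login, so 
YarnSparkHadoopUtil stays a collection of utility functions.

```scala
import java.util.concurrent.{Executors, TimeUnit}

// Hypothetical sketch only: a standalone manager that owns the periodic
// credential-renewal thread instead of parking it in YarnSparkHadoopUtil.
class CredentialRenewer(renewalIntervalMs: Long)(renew: () => Unit) {
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Start periodic renewal; in a real implementation `renew` would
  // re-login from the keytab and push fresh tokens to executors.
  def start(): Unit = {
    scheduler.scheduleAtFixedRate(new Runnable {
      override def run(): Unit = renew()
    }, 0L, renewalIntervalMs, TimeUnit.MILLISECONDS)
  }

  // Stop the renewal thread, e.g. on backend shutdown.
  def stop(): Unit = scheduler.shutdownNow()
}
```

    A YARN-specific backend would then construct this with its own re-login 
logic, while other cluster managers could reuse the scheduling shell.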
