Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19272#discussion_r143856998
  
    --- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
    @@ -194,6 +198,27 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
           sc.conf.getOption("spark.mesos.driver.frameworkId").map(_ + suffix)
         )
     
    +    // check that the credentials are defined, even though it's likely that auth would have failed
    +    // already if you've made it this far
    +    if (principal != null && hadoopDelegationCreds.isDefined) {
    +      logDebug(s"Principal found ($principal) starting token renewer")
    +      val credentialRenewerThread = new Thread {
    +        setName("MesosCredentialRenewer")
    +        override def run(): Unit = {
    +          val rt = MesosCredentialRenewer.getTokenRenewalTime(hadoopDelegationCreds.get, conf)
    --- End diff --
    
    So, you need this because `hadoopDelegationCreds` doesn't retain the information about when the tokens should be renewed (i.e. the return value of `obtainDelegationTokens`). Perhaps some minor refactoring would help clean this up?
    
    In fact, `hadoopDelegationCreds` is a `val`, so any executors that start after the initial token set expires will fail, no? They'll fetch the stale `hadoopDelegationCreds` from the driver, and won't get the `UpdateDelegationTokens` message until it's way too late.
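    
    For what it's worth, here's a minimal sketch of the shape that refactoring could take (not the actual patch; `TokenState`, `fetchTokens` and `onUpdate` are hypothetical names): return the renewal time together with the serialized tokens, and keep both behind a `@volatile` reference so executors that register after a renewal read the fresh tokens.
    
    ```scala
    package org.apache.spark.scheduler.cluster.mesos
    
    // Sketch only: pairs the serialized tokens with their renewal time.
    // `fetchTokens` stands in for whatever wraps obtainDelegationTokens.
    private[mesos] class TokenState(fetchTokens: () => (Array[Byte], Long)) {
    
      // Volatile so executors registering later read the refreshed tokens,
      // not the ones captured when the backend started.
      @volatile private var current: (Array[Byte], Long) = fetchTokens()
    
      /** Tokens to hand to a newly registered executor. */
      def tokens: Array[Byte] = current._1
    
      /** Renewal loop, meant to run on the MesosCredentialRenewer thread. */
      def runRenewalLoop(onUpdate: Array[Byte] => Unit): Unit = {
        while (!Thread.currentThread().isInterrupted) {
          val sleepMs = current._2 - System.currentTimeMillis()
          if (sleepMs > 0) Thread.sleep(sleepMs)
          current = fetchTokens()
          // Push the refreshed tokens to executors that are already running,
          // e.g. via the UpdateDelegationTokens message mentioned above.
          onUpdate(current._1)
        }
      }
    }
    ```
    
    The point being that whatever hands credentials to a newly registered executor reads the mutable reference, rather than a `val` captured once at startup.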

