Github user kalvinnchau commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19272#discussion_r140830852
  
    --- Diff: resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala ---
    @@ -194,6 +198,26 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
           sc.conf.getOption("spark.mesos.driver.frameworkId").map(_ + suffix)
         )
     
    +    // check that the credentials are defined, even though it's likely that auth would have failed
    +    // already if you've made it this far
    +    if (principal != null && hadoopDelegationCreds.isDefined) {
    +      logDebug(s"Principal found ($principal) starting token renewer")
    +      val credentialRenewerThread = new Thread {
    +        setName("MesosCredentialRenewer")
    +        override def run(): Unit = {
    +          val credentialRenewer =
    +            new MesosCredentialRenewer(
    +              conf,
    +              hadoopDelegationTokenManager.get,
    +              MesosCredentialRenewer.getTokenRenewalTime(hadoopDelegationCreds.get, conf),
    --- End diff --
    
    This sets the first renewal time to the token's expiration time.
    
    Instead, it should be calculated the same way the next renewal time is computed inside the `MesosCredentialRenewer` class, so that the first token is renewed once 75% of its lifetime has passed:
    
    ```scala
    val currTime = System.currentTimeMillis()
    val renewTime = MesosCredentialRenewer.getTokenRenewalTime(hadoopDelegationCreds.get, conf)
    val rt = 0.75 * (renewTime - currTime)
    
    val credentialRenewer =
      new MesosCredentialRenewer(
        conf,
        hadoopDelegationTokenManager.get,
        (currTime + rt).toLong,
        driverEndpoint)
    credentialRenewer.scheduleTokenRenewal()
    ```
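    
    For illustration, here's a minimal self-contained sketch of that 75% heuristic with made-up values; `firstRenewalTime` and the 24-hour lifetime are hypothetical stand-ins for the real `getTokenRenewalTime` plumbing:
    
    ```scala
    // Standalone sketch of the 75%-of-lifetime heuristic (illustrative only;
    // not the actual MesosCredentialRenewer code).
    object RenewalTimeSketch {
      // Hypothetical helper: schedule the first renewal after 75% of the
      // remaining token lifetime rather than at the expiration time itself.
      def firstRenewalTime(currTimeMs: Long, tokenExpiryMs: Long): Long = {
        val rt = 0.75 * (tokenExpiryMs - currTimeMs)
        currTimeMs + rt.toLong
      }
    
      def main(args: Array[String]): Unit = {
        val now = System.currentTimeMillis()
        val expiry = now + 24 * 60 * 60 * 1000L // assume a 24-hour token lifetime
        // Prints a timestamp roughly 18 hours from now (75% of 24 hours).
        println(s"first renewal at: ${firstRenewalTime(now, expiry)}")
      }
    }
    ```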


