Github user ifilonenko commented on the issue:

    https://github.com/apache/spark/pull/22624
  
    If we are talking about the token renewal functionality, could we possibly
refactor `HadoopFSDelegationTokenProvider` as well? I found that within
`obtainDelegationTokens()`, this code block:
    ```scala
        val fetchCreds = fetchDelegationTokens(getTokenRenewer(hadoopConf), fsToGetTokens, creds)

        // Get the token renewal interval if it is not set. It will only be called once.
        if (tokenRenewalInterval == null) {
          tokenRenewalInterval = getTokenRenewalInterval(hadoopConf, sparkConf, fsToGetTokens)
        }
    ```
    ends up calling `fetchDelegationTokens()` twice, since `tokenRenewalInterval`
is always null when the `TokenManager` is first created. I think the second call
is unnecessary in the Kubernetes case, as it creates two delegation tokens when
only one is needed. I don't know whether the use case differs on Mesos / YARN,
but could this be refactored to call `fetchDelegationTokens()` only once at
startup, or to accept a parameter specifying `tokenRenewalInterval`? I could
send a follow-up PR if desired, but I'm not sure whether that fits within the
scope of this PR.
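
    The single-fetch idea could look roughly like the self-contained sketch
below. It has no Hadoop dependencies: `FakeCreds`, `renewalIntervalFrom`, and
the interval constant are stand-ins for illustration, not the real Spark API.
The point is only that the renewal interval is derived from the credentials
already returned by the one fetch, rather than fetching a second batch of
tokens just to inspect it:

    ```scala
    // Hypothetical sketch of the suggested refactor. fetchCount stands in for
    // the expensive NameNode round-trip so the call count is observable.
    object TokenFetchSketch {
      var fetchCount = 0

      case class FakeCreds(tokens: Seq[String])

      // Stand-in for HadoopFSDelegationTokenProvider.fetchDelegationTokens().
      def fetchDelegationTokens(): FakeCreds = {
        fetchCount += 1 // one delegation token created per call
        FakeCreds(Seq("hdfs-token"))
      }

      // Derive the interval from the already-fetched credentials instead of
      // issuing a second fetchDelegationTokens() call (assumed helper).
      def renewalIntervalFrom(creds: FakeCreds): Option[Long] =
        if (creds.tokens.nonEmpty) Some(24L * 60 * 60 * 1000) else None

      // Mirrors the provider's one-shot caching of the interval.
      var tokenRenewalInterval: Option[Long] = null

      def obtainDelegationTokens(): (FakeCreds, Option[Long]) = {
        val creds = fetchDelegationTokens() // the only token fetch
        if (tokenRenewalInterval == null) {
          tokenRenewalInterval = renewalIntervalFrom(creds)
        }
        (creds, tokenRenewalInterval)
      }
    }
    ```

    With this shape the first `obtainDelegationTokens()` call performs exactly
one fetch and caches the interval, instead of the current two fetches.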

