Github user redsanket commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19140#discussion_r137096611
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala ---
    @@ -103,15 +103,17 @@ private[deploy] class HadoopFSDelegationTokenProvider(fileSystems: Configuration
     
       private def getTokenRenewalInterval(
           hadoopConf: Configuration,
    -      filesystems: Set[FileSystem]): Option[Long] = {
    +      filesystems: Set[FileSystem],
    +      creds:Credentials): Option[Long] = {
         // We cannot use the tokens generated with renewer yarn. Trying to renew
         // those will fail with an access control issue. So create new tokens with the logged in
         // user as renewer.
    -    val creds = fetchDelegationTokens(
    +    val fetchCreds = fetchDelegationTokens(
    --- End diff --
    
    Also, here is a difference between spark 2.2 and master: master is missing the PRINCIPAL (aka spark.yarn.principal) config. Not sure if we need to do this now. Let me know your opinion @vanzin @tgravescs 
    
    sparkConf.get(PRINCIPAL).flatMap { renewer =>
      val creds = new Credentials()
      hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
        val dstFs = dst.getFileSystem(hadoopConf)
        dstFs.addDelegationTokens(renewer, creds)
      }
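    For comparison, a minimal sketch of what the PRINCIPAL-based fetch could look like as a helper in this provider. This is only an illustration, not the actual branch-2.2 code: `fetchTokensAsPrincipal` is a hypothetical name, and `PRINCIPAL` / `hadoopFSsToAccess` are assumed to be the same symbols used in the quoted snippet above.

    ```scala
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.security.Credentials

    // Sketch: fetch delegation tokens with the configured principal as renewer,
    // mirroring the spark 2.2 behaviour quoted above. Returns None when
    // spark.yarn.principal is not set.
    private def fetchTokensAsPrincipal(
        sparkConf: SparkConf,
        hadoopConf: Configuration): Option[Credentials] = {
      sparkConf.get(PRINCIPAL).map { renewer =>
        val creds = new Credentials()
        hadoopFSsToAccess(hadoopConf, sparkConf).foreach { dst =>
          // Each FileSystem adds its tokens (renewable by `renewer`) to creds.
          dst.getFileSystem(hadoopConf).addDelegationTokens(renewer, creds)
        }
        creds
      }
    }
    ```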


---
