HeartSaVioR commented on a change in pull request #28336:
URL: https://github.com/apache/spark/pull/28336#discussion_r421172010



##########
File path: 
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
##########
@@ -863,10 +864,20 @@ object ApplicationMaster extends Logging {
     val ugi = sparkConf.get(PRINCIPAL) match {
       // We only need to log in with the keytab in cluster mode. In client mode, the driver
       // handles the user keytab.
-      case Some(principal) if amArgs.userClass != null =>
+      case Some(principal) if master.isClusterMode =>
         val originalCreds = UserGroupInformation.getCurrentUser().getCredentials()
         SparkHadoopUtil.get.loginUserFromKeytab(principal, sparkConf.get(KEYTAB).orNull)
         val newUGI = UserGroupInformation.getCurrentUser()
+
+        // Set the context class loader so that the token manager has access to jars
+        // distributed by the user.
+        Utils.withContextClassLoader(master.userClassLoader) {
+          // Re-obtain delegation tokens, as they might be outdated as of now. Add the fresh
+          // tokens on top of the original user's credentials (overwrite).
+          val credentialManager = new HadoopDelegationTokenManager(sparkConf, yarnConf, null)
+          credentialManager.obtainDelegationTokens(originalCreds)

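For context, the `Utils.withContextClassLoader` call in the diff follows the usual save-set-restore pattern for a thread's context class loader. A minimal sketch of that pattern, assuming a signature like Spark's `withContextClassLoader[T](loader: ClassLoader)(fn: => T)` (details of the real helper may differ):

```scala
// Sketch of the save-set-restore pattern: run `fn` with `loader` as the
// thread's context class loader, restoring the previous loader afterwards
// even if `fn` throws.
def withContextClassLoader[T](loader: ClassLoader)(fn: => T): T = {
  val thread = Thread.currentThread()
  val old = thread.getContextClassLoader
  thread.setContextClassLoader(loader)
  try {
    fn
  } finally {
    thread.setContextClassLoader(old)
  }
}
```

This matters here because `HadoopDelegationTokenManager` may need to load token-provider classes from jars the user distributed, which are only visible through `master.userClassLoader`.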
Review comment:
       Technically yes, but I didn't, because I have no idea how to manually check whether the tokens are expired or not. I don't think that's exposed in the common interface for tokens.
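To illustrate the limitation: Hadoop's generic `TokenIdentifier` carries no expiry information, so any check has to downcast to a concrete identifier type. A hedged sketch of such a best-effort check, assuming Hadoop's `Token.decodeIdentifier()` and `AbstractDelegationTokenIdentifier.getMaxDate()` (this covers only delegation-token identifiers that extend that class, not arbitrary token kinds):

```scala
import org.apache.hadoop.security.Credentials
import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier

import scala.collection.JavaConverters._

// Hypothetical helper: returns true if any token in `creds` is past its
// max lifetime. Works only for identifiers extending
// AbstractDelegationTokenIdentifier; other token kinds expose no expiry
// through the common interface, which is the problem described above.
def hasExpiredToken(creds: Credentials, now: Long = System.currentTimeMillis()): Boolean = {
  creds.getAllTokens.asScala.exists { token =>
    token.decodeIdentifier() match {
      case id: AbstractDelegationTokenIdentifier => id.getMaxDate < now
      case _ => false // no portable way to tell for other token kinds
    }
  }
}
```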






