tgravescs commented on a change in pull request #28336:
URL: https://github.com/apache/spark/pull/28336#discussion_r420790346
##########
File path:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
##########
@@ -867,6 +868,20 @@ object ApplicationMaster extends Logging {
      val originalCreds = UserGroupInformation.getCurrentUser().getCredentials()
      SparkHadoopUtil.get.loginUserFromKeytab(principal, sparkConf.get(KEYTAB).orNull)
      val newUGI = UserGroupInformation.getCurrentUser()
+
+      if (master.isClusterMode) {
+        // Set the context class loader so that the token manager has access to jars
+        // distributed by the user.
+        Utils.withContextClassLoader(master.userClassLoader) {
+          // Re-obtain delegation tokens, as they might be outdated as of now. Add the fresh
+          // tokens on top of the original user's credentials (overwrite).
+          // This is only needed in cluster mode, because in client mode, AM will soon retrieve
+          // the latest tokens from the driver.
+          val credentialManager = new HadoopDelegationTokenManager(sparkConf, yarnConf, null)
Review comment:
Right, so my question is: can we get rid of line 869 above, where it calls
SparkHadoopUtil.get.loginUserFromKeytab(principal, sparkConf.get(KEYTAB).orNull),
if that login is already done inside obtainDelegationTokens? It looks like you
can from the code, but I would want to test to verify.
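For illustration only, the simplification being asked about would look roughly
like this (a sketch, not runnable outside a Spark build with a Kerberos setup;
it assumes HadoopDelegationTokenManager.obtainDelegationTokens performs the
keytab login itself, which is exactly the point that needs testing):

    // Before: explicit login, then token manager (possibly logging in again).
    val originalCreds = UserGroupInformation.getCurrentUser().getCredentials()
    SparkHadoopUtil.get.loginUserFromKeytab(principal, sparkConf.get(KEYTAB).orNull)
    val newUGI = UserGroupInformation.getCurrentUser()

    // After (proposed, unverified): rely on the token manager's internal login
    // when re-obtaining tokens, and drop the explicit loginUserFromKeytab call.
    val credentialManager = new HadoopDelegationTokenManager(sparkConf, yarnConf, null)
    credentialManager.obtainDelegationTokens(originalCreds)

Whether the second form preserves the current UGI semantics (in particular,
what UserGroupInformation.getCurrentUser() returns afterwards) is the open
question in this thread.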
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]