viirya commented on a change in pull request #31761:
URL: https://github.com/apache/spark/pull/31761#discussion_r590889740
##########
File path: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala
##########
@@ -99,11 +100,24 @@ private[deploy] class HadoopFSDelegationTokenProvider
   private def fetchDelegationTokens(
       renewer: String,
       filesystems: Set[FileSystem],
-      creds: Credentials): Credentials = {
+      creds: Credentials,
+      hadoopConf: Configuration,
+      sparkConf: SparkConf): Credentials = {
+
+    // The hosts of the file systems to be excluded from token renewal
+    val fsToExclude = sparkConf.get(KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE)
+      .map(new Path(_).getFileSystem(hadoopConf).getUri.getHost)
+      .toSet
     filesystems.foreach { fs =>
-      logInfo(s"getting token for: $fs with renewer $renewer")
-      fs.addDelegationTokens(renewer, creds)
+      if (fsToExclude.contains(fs.getUri.getHost)) {
+        // RM skips renewing a token with an empty renewer
Review comment:
We only tested this under YARN and verified that passing an empty renewer makes
YARN skip the renewal. I'm not sure whether other resource schedulers follow
the same behavior, so I can document the config and mention that it is only
known to work on YARN.
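
Since the diff excerpt above is truncated at the commented line, here is a minimal sketch (not the PR's verbatim code) of how the branch under discussion behaves: file systems whose host appears in the exclude set get their tokens fetched with an empty renewer, which the YARN RM will not renew, while all other file systems keep the normal renewer. The helper name fetchTokensSketch and the exact fallback branch are assumptions for illustration.

    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.security.Credentials

    // Sketch only: fetch delegation tokens, skipping renewal for file systems
    // whose host is listed in the exclude set.
    def fetchTokensSketch(
        renewer: String,
        filesystems: Set[FileSystem],
        creds: Credentials,
        fsToExclude: Set[String]): Credentials = {
      filesystems.foreach { fs =>
        if (fsToExclude.contains(fs.getUri.getHost)) {
          // An empty renewer means the YARN RM skips renewing this token.
          fs.addDelegationTokens("", creds)
        } else {
          // Normal path: the configured renewer can keep renewing the token.
          fs.addDelegationTokens(renewer, creds)
        }
      }
      creds
    }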