tgravescs commented on a change in pull request #31761:
URL: https://github.com/apache/spark/pull/31761#discussion_r590905476
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -691,6 +691,15 @@ package object config {
.toSequence
.createWithDefault(Nil)
+  private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
+    ConfigBuilder("spark.kerberos.renewal.exclude.hadoopFileSystems")
+      .doc("The list of Hadoop filesystem URLs whose hosts will be excluded from " +
Review comment:
Yes, it's true, like I mentioned above. I'm not against it; I just want to
understand it and make sure this config makes sense. For instance, instead of
having this config we could have one that says don't include the defaultFs,
the staging FS, or both in renewal. I think those are really the only two
filesystems that make sense to put into this config. Anything else doesn't get
renewed unless it's in spark.kerberos.access.hadoopFileSystems, and in that
case you can just leave it out of spark.kerberos.access.hadoopFileSystems. The
nice thing about this approach is that it's generic if things change in the
future. The downside is that I have to know the NameNode URLs for the
defaultFs and the staging FS to add them to this config.
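To make the shape of this concrete, here is a minimal sketch of how the
exclude config might be consumed; the helper name `filesystemsToRenew` and
the call site are hypothetical, not the actual
HadoopFSDelegationTokenProvider internals:
```scala
import java.net.URI

import org.apache.spark.SparkConf

// Hypothetical helper: drop filesystems whose host appears in
// spark.kerberos.renewal.exclude.hadoopFileSystems before scheduling
// delegation token renewal.
def filesystemsToRenew(conf: SparkConf, candidates: Set[URI]): Set[URI] = {
  // Parse the comma-separated list of filesystem URLs into a set of hosts.
  val excludedHosts = conf
    .get("spark.kerberos.renewal.exclude.hadoopFileSystems", "")
    .split(",")
    .map(_.trim)
    .filter(_.nonEmpty)
    .map(url => new URI(url).getHost)
    .toSet

  // Keep only filesystems whose NameNode host is not excluded; these
  // still get their tokens renewed at the resource scheduler.
  candidates.filterNot(fs => excludedHosts.contains(fs.getHost))
}
```
This is what I mean by the tradeoff: the URL-based form is generic, but the
user has to supply NameNode URLs, whereas boolean configs (e.g. a
hypothetical spark.kerberos.renewal.excludeDefaultFs) would not.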