tgravescs commented on a change in pull request #31761:
URL: https://github.com/apache/spark/pull/31761#discussion_r589658117
##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -691,6 +691,15 @@ package object config {
.toSequence
.createWithDefault(Nil)
+ private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
+ ConfigBuilder("spark.kerberos.renewal.exclude.hadoopFileSystems")
+ .doc("The list of Hadoop filesystem URLs whose hosts will be excluded
from " +
Review comment:
Right, the point I was getting at is that we only renew the filesystems in
spark.kerberos.access.hadoopFileSystems plus the defaultFs and the staging fs.
So I assume this is basically for the defaultFs? I wanted to make sure it
wasn't something silly like a filesystem specified in both configs. It seems
odd for the defaultFs, since that is the Hadoop filesystem which supports
renewal. I can see it for a separate name node, but that would require you to
specify it in spark.kerberos.access.hadoopFileSystems for us to even try to
renew it. So I'm assuming this is for the odd configuration where you can't
renew on the current default fs. It just seems like an unusual case, so I
wanted to make sure I wasn't missing something.
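For reference, here is a minimal sketch of my reading of the behavior (not
the PR's actual code): collect the renewal candidates from
spark.kerberos.access.hadoopFileSystems plus the default and staging
filesystems, then drop any whose host appears in the exclude list. The helper
name and signature are hypothetical.
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

// Hypothetical helper, assuming the semantics described above.
def filesystemsToRenew(
    accessFs: Seq[String],   // spark.kerberos.access.hadoopFileSystems
    excludeFs: Seq[String],  // spark.kerberos.renewal.exclude.hadoopFileSystems
    stagingDir: Path,
    hadoopConf: Configuration): Set[FileSystem] = {
  // Candidate set: explicitly listed filesystems + defaultFs + staging fs.
  val candidates =
    accessFs.map(url => new Path(url).getFileSystem(hadoopConf)).toSet +
      FileSystem.get(hadoopConf) +            // defaultFs
      stagingDir.getFileSystem(hadoopConf)    // staging fs
  // Exclude by host, matching the config's "hosts will be excluded" wording.
  val excludedHosts = excludeFs.map(url => new Path(url).toUri.getHost).toSet
  candidates.filterNot(fs => excludedHosts.contains(fs.getUri.getHost))
}
```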