tgravescs commented on a change in pull request #31761:
URL: https://github.com/apache/spark/pull/31761#discussion_r593257557



##########
File path: core/src/main/scala/org/apache/spark/internal/config/package.scala
##########
@@ -691,6 +691,15 @@ package object config {
     .toSequence
     .createWithDefault(Nil)
 
+  private[spark] val KERBEROS_FILESYSTEM_RENEWAL_EXCLUDE =
+    ConfigBuilder("spark.kerberos.renewal.exclude.hadoopFileSystems")
+      .doc("The list of Hadoop filesystem URLs whose hosts will be excluded 
from " +

Review comment:
       I was simply trying to clarify the exact case being hit here, confirm there wasn't an alternate solution, and make sure the docs are clear. The specific case being hit could affect the solution.
   
   I think some comments on this review are very confusing, which is why I wanted clarification. Some indicate that Spark doesn't get initial tokens, while others say that in this case the tokens were already acquired, etc.
   
   In the end, my comment boils down to this: I think we should update the security.md doc to mention renewal in the Kerberos section for Hadoop filesystems, to help explain the behavior to users.
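
   As a concrete illustration of the kind of guidance that could go into the
   Kerberos section of security.md, here is a hedged sketch of how a user might
   combine this with the existing spark.kerberos.access.hadoopFileSystems
   setting, so that initial tokens are still obtained but one filesystem's host
   is skipped during renewal. The filesystem URLs are made-up examples, not
   taken from the patch.
   
     // Hypothetical usage sketch; the namenode URLs below are placeholders.
     import org.apache.spark.SparkConf
   
     val conf = new SparkConf()
       // filesystems Spark should obtain initial delegation tokens for
       .set("spark.kerberos.access.hadoopFileSystems",
         "hdfs://nn1.example.com:8020,hdfs://nn2.example.com:8020")
       // filesystem hosts to skip when renewing those tokens
       .set("spark.kerberos.renewal.exclude.hadoopFileSystems",
         "hdfs://nn2.example.com:8020")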
   



