viirya commented on a change in pull request #31761:
URL: https://github.com/apache/spark/pull/31761#discussion_r594749886
##########
File path: docs/security.md
##########
@@ -838,6 +838,17 @@ The following options provides finer-grained control for
this feature:
</td>
<td>3.0.0</td>
</tr>
+<tr>
+ <td><code>spark.kerberos.renewal.exclude.hadoopFileSystems</code></td>
+ <td>(none)</td>
+ <td>
+    A comma-separated list of Hadoop filesystems whose hosts will be excluded from delegation
+    token renewal at the resource scheduler. For example,
+    <code>spark.kerberos.renewal.exclude.hadoopFileSystems=hdfs://nn1.com:8032,
+    hdfs://nn2.com:8032</code>. This is known to work under YARN for now, so the
+    YARN Resource Manager won't renew tokens for the application.
+    Note that because the resource scheduler does not renew the tokens, the application
+    may not keep running once the tokens expire.
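As an illustration, here is a hedged sketch of how the proposed option might be set in `spark-defaults.conf`, reusing the placeholder host names `nn1.com`/`nn2.com` from the example in the doc text above:

```
# Exclude these filesystems' delegation tokens from renewal by the
# resource scheduler (YARN). Host names below are placeholders.
spark.kerberos.renewal.exclude.hadoopFileSystems  hdfs://nn1.com:8032,hdfs://nn2.com:8032
```

The same value could equally be passed via `--conf` on `spark-submit`; either way it only affects YARN's renewal behavior, per the doc text.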
Review comment:
Okay, this sounds like a good point to me. I think I should not change the
behavior of `getTokenRenewalInterval` because it is not related to the issue
here. The renew call in `getTokenRenewalInterval` is only used for obtaining
the renewal interval for all FS tokens.
An exception thrown during an individual `renew` call is ignored. The
actual renewal interval will be the minimum among all intervals. But yes, this
could be a behavior change if the token with an empty renewer actually has the
minimum interval and we can no longer obtain it because of the exception.
Let me restore the behavior of `getTokenRenewalInterval` to make it safer.
I assume this config is only for YARN-specific behavior, and I documented the
YARN restriction explicitly in the config doc/security.md. Is there anything
else I should do for it?
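To make the behavior-change concern concrete, here is a small illustrative sketch (in Python, not Spark's actual Scala implementation; the function and token names are hypothetical) of the logic described above: each token's individual `renew` call may throw, failures are ignored, and the overall renewal interval is the minimum of the intervals obtained successfully. If the token that fails (e.g. one with an empty renewer) would have had the minimum interval, the computed result silently changes.

```python
def token_renewal_interval(tokens, renew):
    """Return the minimum renewal interval among tokens whose renew()
    call succeeds, or None if every renew() call fails.

    Mirrors the behavior described in the comment: an exception from an
    individual renew call is swallowed rather than propagated, so the
    minimum is taken only over the tokens that renewed successfully.
    """
    intervals = []
    for token in tokens:
        try:
            intervals.append(renew(token))  # may raise, e.g. for an empty renewer
        except Exception:
            # Ignore the failing token; its interval never enters the minimum.
            pass
    return min(intervals) if intervals else None
```

With this shape it is easy to see the edge case: dropping a failing token from the minimum is harmless unless that token happened to have the smallest interval, which is exactly why restoring the original `getTokenRenewalInterval` behavior is the safer choice.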
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]