Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19140#discussion_r137152903
--- Diff: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala ---
@@ -103,15 +103,17 @@ private[deploy] class HadoopFSDelegationTokenProvider(fileSystems: Configuration
   private def getTokenRenewalInterval(
       hadoopConf: Configuration,
-      filesystems: Set[FileSystem]): Option[Long] = {
+      filesystems: Set[FileSystem],
+      creds: Credentials): Option[Long] = {
     // We cannot use the tokens generated with renewer yarn. Trying to renew
     // those will fail with an access control issue. So create new tokens with the logged in
     // user as renewer.
-    val creds = fetchDelegationTokens(
+    val fetchCreds = fetchDelegationTokens(
--- End diff ---
That code was in `getTokenRenewalInterval`; that call is only needed when
principal and keytab are provided, so adding the code back should be ok. It
shouldn't cause any issues if it's not there, though, aside from a wasted round
trip to the NNs.
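To make the guard concrete: a minimal, hypothetical sketch (simplified stand-in types, invented names; not the actual Spark implementation) of fetching tokens for renewal-interval calculation only when both a principal and a keytab are configured, so the NameNode round trip is skipped otherwise:

```scala
// Hypothetical sketch only: stand-in for Hadoop's Credentials type.
case class Credentials(tokens: List[String])

// Stand-in for the real delegation-token fetch, which would contact the NNs.
def fetchDelegationTokens(renewer: String): Credentials =
  Credentials(List(s"token-for-$renewer"))

// Only compute a renewal interval when principal and keytab are both set;
// without them there is nothing to relogin with, so skip the round trip.
def getTokenRenewalInterval(
    principal: Option[String],
    keytab: Option[String]): Option[Long] = {
  if (principal.isDefined && keytab.isDefined) {
    val creds = fetchDelegationTokens(renewer = "currentUser")
    // In the real provider the interval comes from the fetched tokens;
    // here we return a fixed 24h placeholder when any tokens came back.
    if (creds.tokens.nonEmpty) Some(24L * 60 * 60 * 1000) else None
  } else {
    None
  }
}
```

With this shape, a missing principal/keytab short-circuits before any token fetch, which matches the point above: leaving the call out is harmless apart from correctness of the keytab path, while guarding it avoids the wasted round trip.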
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]