Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/19140#discussion_r137264045
--- Diff: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala ---
@@ -103,15 +103,17 @@ private[deploy] class HadoopFSDelegationTokenProvider(fileSystems: Configuration
   private def getTokenRenewalInterval(
       hadoopConf: Configuration,
-      filesystems: Set[FileSystem]): Option[Long] = {
+      filesystems: Set[FileSystem],
+      creds: Credentials): Option[Long] = {
     // We cannot use the tokens generated with renewer yarn. Trying to renew
     // those will fail with an access control issue. So create new tokens with the logged in
     // user as renewer.
-    val creds = fetchDelegationTokens(
+    val fetchCreds = fetchDelegationTokens(
--- End diff ---
I'd prefer not to call it if we don't need to, so as long as adding the
config back doesn't mess with the Mesos side of things (since this is now
common code), I think that would be good. The PRINCIPAL config is a
YARN-specific config, but looking at SparkSubmit it appears to be used for
Mesos as well.
@vanzin, do you happen to know if Mesos is using that as well? I haven't
kept up with Mesos Kerberos support, so I'm not sure if more is going to
happen there.
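
For context, the shape of the change in the diff above can be sketched roughly as
follows. This is a simplified illustration, not Spark's actual implementation:
the `fetchDelegationTokens` helper and the interval extraction are placeholders
standing in for the real logic.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

// Hypothetical helper standing in for the provider's real method, which asks
// each FileSystem for delegation tokens issued to the given renewer.
private def fetchDelegationTokens(
    renewer: String,
    filesystems: Set[FileSystem]): Credentials = {
  val creds = new Credentials()
  filesystems.foreach(_.addDelegationTokens(renewer, creds))
  creds
}

// Sketch of the refactored signature: the caller's Credentials are passed in,
// while a separate Credentials object (fetchCreds) holds tokens fetched with
// the logged-in user as renewer, since tokens with renewer "yarn" cannot be
// renewed by the current user.
private def getTokenRenewalInterval(
    hadoopConf: Configuration,
    filesystems: Set[FileSystem],
    creds: Credentials): Option[Long] = {
  val fetchCreds = fetchDelegationTokens(
    UserGroupInformation.getCurrentUser().getUserName, filesystems)
  // Placeholder: the real code derives the renewal interval from the
  // fetched tokens' identifiers; omitted here.
  None
}
```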
---