Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19140#discussion_r137334603
--- Diff: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala ---
@@ -103,15 +103,17 @@ private[deploy] class HadoopFSDelegationTokenProvider(fileSystems: Configuration
   private def getTokenRenewalInterval(
       hadoopConf: Configuration,
-      filesystems: Set[FileSystem]): Option[Long] = {
+      filesystems: Set[FileSystem],
+      creds: Credentials): Option[Long] = {
     // We cannot use the tokens generated with renewer yarn. Trying to renew
     // those will fail with an access control issue. So create new tokens with the logged in
     // user as renewer.
-    val creds = fetchDelegationTokens(
+    val fetchCreds = fetchDelegationTokens(
--- End diff --
I'm pretty sure Mesos is not currently hooked up to the principal / keytab
stuff. It just picks up the initial delegation token set, and when those
expire, things stop working.
Adding the check back here is the right thing; it shouldn't affect Mesos
when it adds support for principal / keytab (or if it does, it can be fixed at
that time).
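
For readers following along, here is a minimal, standalone sketch of the shape this change gives the method: `creds` is passed in, and the freshly fetched tokens (issued to the logged-in user rather than the yarn renewer) are held in `fetchCreds`. The object name `RenewalIntervalSketch` and the body of the `fetchDelegationTokens` helper are illustrative stand-ins, not the actual Spark implementation.

    import scala.collection.JavaConverters._
    import scala.util.Try

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem
    import org.apache.hadoop.security.{Credentials, UserGroupInformation}
    import org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier

    object RenewalIntervalSketch {

      // Hypothetical stand-in for the provider's fetchDelegationTokens helper:
      // asks each filesystem for tokens issued to `renewer` and adds them to `creds`.
      private def fetchDelegationTokens(
          renewer: String,
          filesystems: Set[FileSystem],
          creds: Credentials): Credentials = {
        filesystems.foreach(fs => fs.addDelegationTokens(renewer, creds))
        creds
      }

      // Tokens are fetched with the logged-in user as renewer (not "yarn"), so
      // renewing them here does not trip an access control check; the smallest
      // renewal interval across all tokens is returned.
      private def getTokenRenewalInterval(
          hadoopConf: Configuration,
          filesystems: Set[FileSystem],
          creds: Credentials): Option[Long] = {
        val fetchCreds = fetchDelegationTokens(
          UserGroupInformation.getCurrentUser().getUserName, filesystems, creds)

        val renewIntervals = fetchCreds.getAllTokens.asScala
          .filter(_.decodeIdentifier().isInstanceOf[AbstractDelegationTokenIdentifier])
          .flatMap { token =>
            Try {
              val newExpiration = token.renew(hadoopConf)
              val identifier =
                token.decodeIdentifier().asInstanceOf[AbstractDelegationTokenIdentifier]
              newExpiration - identifier.getIssueDate
            }.toOption
          }

        if (renewIntervals.isEmpty) None else Some(renewIntervals.min)
      }
    }

Taking the minimum over all tokens means the scheduled refresh fires before the earliest token would need renewing.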
---