Github user ArtRand commented on a diff in the pull request:
https://github.com/apache/spark/pull/19272#discussion_r149549953
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -213,6 +216,14 @@ private[spark] class MesosCoarseGrainedSchedulerBackend(
sc.conf.getOption("spark.mesos.driver.frameworkId").map(_ + suffix)
)
+    // check that the credentials are defined, even though it's likely that auth would have
+    // failed already if you've made it this far, then start the token renewer
+ if (hadoopDelegationTokens.isDefined) {
--- End diff ---
I agree that I shouldn't need the `hadoopDelegationTokens.isDefined` conditional;
however, some check (`UserGroupInformation.isSecurityEnabled` or similar) will still
be needed before passing the `driverEndpoint` to the renewer/manager here. When the
initial tokens are generated, `driverEndpoint` is still `None` because `start()`
hasn't been called yet.
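
The timing issue above can be sketched as follows. This is a minimal, self-contained
illustration, not the actual backend code: `securityEnabled` stands in for Hadoop's
`UserGroupInformation.isSecurityEnabled`, and `DriverEndpointRef` and `TokenRenewer`
are hypothetical placeholder types, since the real classes drag in Spark and Hadoop
dependencies.

```scala
// Sketch of the ordering problem: tokens are generated before start(),
// so the driver endpoint is still None at that point and the renewer
// can only be wired up once the endpoint exists.
object TokenRenewerSketch {
  // Placeholder for an RPC endpoint reference (hypothetical type).
  final case class DriverEndpointRef(name: String)

  // Stand-in for the renewer/manager; `securityEnabled` mimics
  // UserGroupInformation.isSecurityEnabled as a plain boolean.
  final class TokenRenewer(securityEnabled: Boolean) {
    private var driverEndpoint: Option[DriverEndpointRef] = None

    // Called later, once start() has produced a live endpoint.
    def setDriverEndpoint(ref: DriverEndpointRef): Unit =
      driverEndpoint = Some(ref)

    // Renewal is possible only when security is on AND the endpoint exists.
    def canRenew: Boolean = securityEnabled && driverEndpoint.isDefined
  }

  def main(args: Array[String]): Unit = {
    val renewer = new TokenRenewer(securityEnabled = true)
    // Before start(): initial tokens exist, but the endpoint is still None.
    assert(!renewer.canRenew)
    // After start(): the endpoint is handed to the renewer.
    renewer.setDriverEndpoint(DriverEndpointRef("driver"))
    assert(renewer.canRenew)
    println("ok")
  }
}
```

The point of guarding on the security check rather than `hadoopDelegationTokens.isDefined`
is that it expresses intent (renew only in a kerberized deployment) instead of relying on
a side effect of token generation having succeeded.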
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]