Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112045391
--- Diff: core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala ---
@@ -174,6 +177,24 @@ private[spark] class CoarseGrainedExecutorBackend(
private[spark] object CoarseGrainedExecutorBackend extends Logging {
+ private def addMesosDelegationTokens(driverConf: SparkConf) {
+ val value = driverConf.get("spark.mesos.kerberos.userCredentials")
+ val tokens = DatatypeConverter.parseBase64Binary(value)
+
+ logDebug(s"Found delegation tokens of ${tokens.length} bytes.")
+
+ // Use tokens for HDFS login.
+ val hadoopConf = SparkHadoopUtil.get.newConfiguration(driverConf)
+ hadoopConf.set("hadoop.security.authentication", "Token")
+ UserGroupInformation.setConfiguration(hadoopConf)
+
+ // Decode tokens and add them to the current user's credentials.
+ val creds = UserGroupInformation.getCurrentUser.getCredentials
+ val tokensBuf = new java.io.ByteArrayInputStream(tokens)
+ creds.readTokenStorageStream(new java.io.DataInputStream(tokensBuf))
+ UserGroupInformation.getCurrentUser.addCredentials(creds)
+ }
--- End diff --
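
For context, the hunk above is only the executor-side decode. A minimal sketch of
the driver-side counterpart it assumes (the code that would serialize the
submitter's tokens into the "spark.mesos.kerberos.userCredentials" property read
above) could look like the following; the object and method names here are
hypothetical and not part of this PR:

import java.io.{ByteArrayOutputStream, DataOutputStream}
import javax.xml.bind.DatatypeConverter

import org.apache.hadoop.security.UserGroupInformation

import org.apache.spark.SparkConf

object MesosDelegationTokenEncodeSketch {
  // Hypothetical helper: serialize the current user's Hadoop tokens into the
  // conf key that addMesosDelegationTokens() decodes on the executor side.
  def encodeCurrentUserTokens(conf: SparkConf): SparkConf = {
    val creds = UserGroupInformation.getCurrentUser.getCredentials
    val buf = new ByteArrayOutputStream()
    val out = new DataOutputStream(buf)
    // Same wire format that creds.readTokenStorageStream() parses above.
    creds.writeTokenStorageToStream(out)
    out.close()
    conf.set("spark.mesos.kerberos.userCredentials",
      DatatypeConverter.printBase64Binary(buf.toByteArray))
  }
}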
To elaborate: one potential use of it is to do token acquisition and token
distribution without needing to provide a principal/keytab to the Spark
application (other than keeping track of the last credential update to account
for AM failures). That work is still WIP, but if the Mesos approach differs
from the YARN one, this would be a great way to iterate on the latter aspect
(token distribution) and ensure it is extensible enough for future
requirements and implementations.
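
On the "token acquisition without principal/keytab" point: a minimal sketch of
what acquisition could look like when the submitting user is already logged in
(for example via a kinit ticket cache), assuming a kerberized Hadoop
configuration; the helper below is illustrative only, not code from this PR:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.{Credentials, UserGroupInformation}

object TokenAcquisitionSketch {
  // Fetch HDFS delegation tokens using whatever login the current UGI already
  // has (e.g. a kinit ticket cache), so no principal/keytab is handed to Spark.
  def fetchHdfsTokens(hadoopConf: Configuration, renewer: String): Credentials = {
    UserGroupInformation.setConfiguration(hadoopConf)
    val creds = new Credentials()
    FileSystem.get(hadoopConf).addDelegationTokens(renewer, creds)
    creds
  }
}

The resulting Credentials could then be serialized and shipped the same way as
in the sketch above, which would keep the token-distribution aspect independent
of how the tokens were obtained.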