Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112039734
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -174,6 +177,24 @@ private[spark] class CoarseGrainedExecutorBackend(
private[spark] object CoarseGrainedExecutorBackend extends Logging {
+ private def addMesosDelegationTokens(driverConf: SparkConf) {
+ val value = driverConf.get("spark.mesos.kerberos.userCredentials")
+ val tokens = DatatypeConverter.parseBase64Binary(value)
+
+ logDebug(s"Found delegation tokens of ${tokens.length} bytes.")
+
+ // Use tokens for HDFS login.
+ val hadoopConf = SparkHadoopUtil.get.newConfiguration(driverConf)
+ hadoopConf.set("hadoop.security.authentication", "Token")
+ UserGroupInformation.setConfiguration(hadoopConf)
+
+ // Decode tokens and add them to the current user's credentials.
+ val creds = UserGroupInformation.getCurrentUser.getCredentials
+ val tokensBuf = new java.io.ByteArrayInputStream(tokens)
+ creds.readTokenStorageStream(new java.io.DataInputStream(tokensBuf))
+ UserGroupInformation.getCurrentUser.addCredentials(creds)
+ }
--- End diff ---
That would be great, thanks - I am trying to get a sense of how renewal
works in this case.
There is some ongoing work on supporting different ways to update credentials,
and I was hoping this approach (which, as I understand it, uses direct RPC
rather than HDFS) could be another one - allowing for multiple common
implementations that can be leveraged across all schedulers, depending on
requirements.
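As a hedged sketch of the mechanism the diff relies on: the driver serializes the user's delegation tokens, base64-encodes them into a conf value, and the executor decodes them back into a byte stream. The real code uses Hadoop's `Credentials.writeTokenStorageToStream`/`readTokenStorageStream` around that stream; here a plain byte payload stands in so the example is self-contained, and the object name `TokenRoundTrip` is purely illustrative.

```scala
import java.io.{ByteArrayInputStream, DataInputStream}
import java.util.Base64

object TokenRoundTrip {
  // Driver side: serialize tokens and base64-encode them into a conf value
  // (e.g. the "spark.mesos.kerberos.userCredentials" key in the diff).
  def encodeTokens(raw: Array[Byte]): String =
    Base64.getEncoder.encodeToString(raw)

  // Executor side: mirrors DatatypeConverter.parseBase64Binary in the diff.
  def decodeTokens(value: String): Array[Byte] =
    Base64.getDecoder.decode(value)

  def main(args: Array[String]): Unit = {
    // A stand-in payload; in the PR this would be a serialized Credentials blob.
    val payload = "HDFS_DELEGATION_TOKEN".getBytes("UTF-8")
    val confValue = encodeTokens(payload)
    val decoded = decodeTokens(confValue)

    // The executor wraps the bytes in a DataInputStream; the PR then hands
    // that stream to Credentials.readTokenStorageStream.
    val in = new DataInputStream(new ByteArrayInputStream(decoded))
    val buf = new Array[Byte](decoded.length)
    in.readFully(buf)
    println(new String(buf, "UTF-8"))
  }
}
```

Note this only covers the transport of the tokens; the renewal question raised above (how refreshed tokens reach already-running executors) is not addressed by this round-trip.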
---