Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r25532214
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +93,102 @@ class YarnSparkHadoopUtil extends SparkHadoopUtil {
if (credentials != null) credentials.getSecretKey(new Text(key)) else
null
}
+  override def setPrincipalAndKeytabForLogin(principal: String, keytab: String): Unit = {
+    loginPrincipal = Option(principal)
+    keytabFile = Option(keytab)
+  }
+
+  private[spark] override def scheduleLoginFromKeytab(
+      callback: (SerializableBuffer) => Unit): Unit = {
+
+    loginPrincipal match {
+      case Some(principal) =>
+        val keytab = keytabFile.get
+        val remoteFs = FileSystem.get(conf)
+        val remoteKeytabPath = new Path(
+          remoteFs.getHomeDirectory, System.getenv("SPARK_STAGING_DIR") + Path.SEPARATOR + keytab)
+        val localFS = FileSystem.getLocal(conf)
+        // At this point, SparkEnv is likely not initialized, so create a dir and put the keytab there.
+        val tempDir = Utils.createTempDir()
+        val localURI = new URI(tempDir.getAbsolutePath + Path.SEPARATOR + keytab)
+        val qualifiedURI = new URI(localFS.makeQualified(new Path(localURI)).toString)
+        FileUtil.copy(
--- End diff --
I've heard that it's not possible to distribute files via the distributed
cache after containers have launched. But I think Tom's point is that you
should distribute the keytab to the AM using the distributed cache. My comment
above still applies to the new HDFS delegation tokens (those cannot use the
cache), but the initial keytab distribution can use it.
(Guess I need to read this code again.)
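For illustration, the copy-to-local staging step in the diff (create a temp dir before SparkEnv exists, then copy the keytab into it) can be sketched without a Hadoop dependency. This is a minimal sketch only: it uses `java.nio` in place of Hadoop's `FileUtil.copy`, and the names `KeytabStaging` and `stageKeytab` are hypothetical, not part of the patch.

```scala
import java.nio.file.{Files, Path, Paths, StandardCopyOption}

// Hypothetical sketch of the staging step from the diff: create a fresh
// temp dir and copy the keytab into it, returning the new local path.
// java.nio stands in for Hadoop's FileUtil.copy so the example is
// self-contained and runnable.
object KeytabStaging {
  def stageKeytab(sourceKeytab: String): String = {
    val tempDir: Path = Files.createTempDirectory("spark-keytab-")
    val source: Path  = Paths.get(sourceKeytab)
    // Keep the original file name, as the diff does with Path.SEPARATOR + keytab.
    val dest: Path    = tempDir.resolve(source.getFileName)
    Files.copy(source, dest, StandardCopyOption.REPLACE_EXISTING)
    dest.toString
  }
}
```

Usage: `KeytabStaging.stageKeytab("/etc/security/user.keytab")` returns a path under a fresh temp directory containing a copy of the keytab; the real code would then hand the qualified local URI to the distributed-cache setup.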