Github user harishreedharan commented on a diff in the pull request:
https://github.com/apache/spark/pull/4688#discussion_r27239682
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala ---
@@ -82,6 +103,244 @@ class YarnSparkHadoopUtil extends SparkHadoopUtil {
     if (credentials != null) credentials.getSecretKey(new Text(key)) else null
   }
+  /*
+   * The following methods are primarily meant to make sure long-running apps like Spark
+   * Streaming apps can run without interruption while writing to secure HDFS. The
+   * scheduleLoginFromKeytab method is called on the driver when the
+   * CoarseGrainedSchedulerBackend starts up. This method wakes up a thread that logs into
+   * the KDC once 75% of the expiry time of the original delegation tokens used for the
+   * container has elapsed. It then creates new delegation tokens and writes them to HDFS
+   * in a pre-specified location - the prefix of which is specified in the sparkConf by
+   * spark.yarn.credentials.file (so the file(s) would be named c-1, c-2, etc. - each
+   * update goes to a new file, with a monotonically increasing suffix). After this,
+   * the credentials are
--- End diff --
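For context, here is a minimal sketch of the mechanism the comment describes - renewing at 75% of the token lifetime and writing each update to a new, monotonically numbered file. The names `CredentialRenewalSketch`, `scheduleRenewal`, `nextCredentialsFile`, and the HDFS prefix are hypothetical illustrations, not the PR's actual code:

```scala
import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object CredentialRenewalSketch {
  // Hypothetical prefix, standing in for spark.yarn.credentials.file.
  val credentialsFilePrefix = "hdfs:///user/spark/credentials/c"

  // Monotonically increasing suffix: each update goes to c-1, c-2, etc.
  private val suffix = new AtomicInteger(0)
  def nextCredentialsFile(): String =
    s"$credentialsFilePrefix-${suffix.incrementAndGet()}"

  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Schedule `renew` to fire once 75% of the tokens' lifetime has elapsed.
  // The callback would re-login to the KDC from the keytab, obtain fresh
  // delegation tokens, and write them to nextCredentialsFile().
  def scheduleRenewal(issueTimeMs: Long, expiryTimeMs: Long)(renew: () => Unit): Unit = {
    val lifetime = expiryTimeMs - issueTimeMs
    val delay = (lifetime * 0.75).toLong - (System.currentTimeMillis() - issueTimeMs)
    scheduler.schedule(new Runnable {
      def run(): Unit = renew()
    }, math.max(0L, delay), TimeUnit.MILLISECONDS)
  }
}
```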
Right now I am not removing them, but I think a good way of doing this would
be to keep something like the last 5 files and remove everything else. Does that
make sense? This is to ensure that any executors reading the files don't have
them disappear out from under them. A rough sketch of that cleanup follows below.
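A hedged sketch of that retention policy, keeping only the newest few credential files so an executor that has already opened an older file can finish reading it before it is deleted. The directory layout, `cleanupOldCredentialFiles`, and `numFilesToKeep` are assumptions for illustration, not the actual implementation:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CredentialCleanupSketch {
  // Keep the N highest-numbered credential files (c-1, c-2, ...) and delete
  // the rest. Recently superseded files linger long enough that executors
  // mid-read never see a file vanish.
  def cleanupOldCredentialFiles(credentialsDir: Path, numFilesToKeep: Int = 5): Unit = {
    val fs = FileSystem.get(new Configuration())
    val suffixOf = (p: Path) => p.getName.split("-").last.toInt
    val files = fs.listStatus(credentialsDir)
      .map(_.getPath)
      .filter(_.getName.matches("c-\\d+"))
      .sortBy(suffixOf)
    // Everything except the newest numFilesToKeep files is safe to delete.
    files.dropRight(numFilesToKeep).foreach(fs.delete(_, false))
  }
}
```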