dongjoon-hyun commented on a change in pull request #34635:
URL: https://github.com/apache/spark/pull/34635#discussion_r751876900
##########
File path:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
##########
@@ -340,6 +340,30 @@ private[spark] class Client(
amContainer.setTokens(ByteBuffer.wrap(serializedCreds))
}
+  /**
+   * Set configurations sent from AM to RM for renewing delegation tokens.
+   */
+  private def setTokenConf(amContainer: ContainerLaunchContext): Unit = {
+    // SPARK-37205: this regex is used to grep a list of configurations and send them to YARN RM
+    // for fetching delegation tokens. See YARN-5910 for more details.
+    // The feature is only supported in Hadoop 3.x and up, hence the check below.
+    val regex = sparkConf.get(config.AM_SEND_TOKEN_CONF)
+    if (regex != null && regex.nonEmpty && VersionUtils.isHadoop3) {
+      logInfo(s"Processing token conf (spark.yarn.am.sendTokenConf) with regex $regex")
+      val dob = new DataOutputBuffer()
+      val copy = new Configuration(false)
+      copy.clear()
+      hadoopConf.asScala.foreach { entry =>
+        if (entry.getKey.matches(regex)) {
+          copy.set(entry.getKey, entry.getValue)
+          logInfo(s"Captured key: ${entry.getKey} -> value: ${entry.getValue}")
+        }
+      }
+      copy.write(dob)
+      amContainer.setTokensConf(ByteBuffer.wrap(dob.getData))
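The diff above filters the Hadoop configuration by a key regex and serializes the surviving entries into a `ByteBuffer` for the container launch context. A minimal, Hadoop-free sketch of that filter-and-serialize pattern (the `TokenConfSketch` object and its length-prefixed encoding are hypothetical stand-ins; the real code uses `org.apache.hadoop.conf.Configuration.write` and `DataOutputBuffer`):

```scala
import java.io.{ByteArrayOutputStream, DataOutputStream}
import java.nio.ByteBuffer

object TokenConfSketch {
  // Select entries whose key matches the regex, then serialize them to a
  // ByteBuffer using a simple count-prefixed key/value encoding. Hadoop's
  // Configuration.write produces its own (different) wire format.
  def filterAndSerialize(conf: Map[String, String], regex: String): ByteBuffer = {
    val selected = conf.filter { case (k, _) => k.matches(regex) }
    val baos = new ByteArrayOutputStream()
    val dos = new DataOutputStream(baos)
    dos.writeInt(selected.size)
    selected.foreach { case (k, v) =>
      dos.writeUTF(k)
      dos.writeUTF(v)
    }
    dos.flush()
    ByteBuffer.wrap(baos.toByteArray)
  }
}
```

The key design point carried over from the diff is that only the matching subset of the configuration is shipped to the RM, rather than the full `hadoopConf`.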
Review comment:
Just a question. The compilation works with Hadoop 2.7, right?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]