lirui-apache commented on a change in pull request #15131:
URL: https://github.com/apache/flink/pull/15131#discussion_r598454995
##########
File path:
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
##########
@@ -344,6 +344,13 @@
.withDescription(
"A comma-separated list of additional
Kerberos-secured Hadoop filesystems Flink is going to access. For example,
yarn.security.kerberos.additionalFileSystems=hdfs://namenode2:9002,hdfs://namenode3:9003.
The client submitting to YARN needs to have access to these file systems to
retrieve the security tokens.");
+ public static final ConfigOption<Boolean> YARN_SECURITY_ENABLED =
Review comment:
Delegation tokens are usually used by a distributed job to authenticate
with a Hadoop-based service like Hive or HBase, but they should be orthogonal
to the resource management framework (YARN, Mesos, or Kubernetes). The Spark
[docs](https://spark.apache.org/docs/latest/security.html#kerberos) indicate
that delegation tokens are supported on both YARN and Mesos, and Spark's
delegation token implementations (e.g. `HadoopDelegationTokenManager`,
`HadoopDelegationTokenProvider`) live in spark-core rather than in a
deployment-specific module.
Other projects may also implement their own delegation token mechanisms, as
[Kafka](https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_delegation.html)
does, but I guess that exceeds the scope of this ticket.
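To make the "orthogonal to the resource manager" point concrete, a provider abstraction could live in a deployment-agnostic module and be driven the same way on YARN, Mesos, or Kubernetes. The sketch below is purely illustrative: the interface and class names are hypothetical, loosely modeled on Spark's `HadoopDelegationTokenProvider`, and are not actual Flink or Spark API.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of a deployment-agnostic delegation token provider,
 * loosely modeled on Spark's HadoopDelegationTokenProvider. All names and
 * signatures here are illustrative assumptions, not real Flink/Spark API.
 */
public class DelegationTokenSketch {

    /** One provider per secured service (HDFS, Hive, HBase, ...). */
    interface DelegationTokenProvider {
        /** Unique name of the service this provider obtains tokens for. */
        String serviceName();

        /** Whether tokens are needed under the current configuration. */
        boolean delegationTokensRequired();

        /** Obtain a token, returned as an opaque string for simplicity. */
        String obtainDelegationToken();
    }

    /** Dummy HDFS provider used only to exercise the interface. */
    static class DummyHdfsProvider implements DelegationTokenProvider {
        @Override public String serviceName() { return "hdfs"; }
        @Override public boolean delegationTokensRequired() { return true; }
        @Override public String obtainDelegationToken() { return "hdfs-token"; }
    }

    /**
     * Collects tokens from all registered providers. Nothing here depends
     * on YARN, Mesos, or Kubernetes, which is the point of the sketch.
     */
    static Map<String, String> obtainAllTokens(
            Iterable<DelegationTokenProvider> providers) {
        Map<String, String> tokens = new HashMap<>();
        for (DelegationTokenProvider p : providers) {
            if (p.delegationTokensRequired()) {
                tokens.put(p.serviceName(), p.obtainDelegationToken());
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        Map<String, String> tokens = obtainAllTokens(
                Collections.<DelegationTokenProvider>singletonList(
                        new DummyHdfsProvider()));
        System.out.println(tokens.get("hdfs"));
    }
}
```

In such a design, the resource-manager-specific code (e.g. the YARN client) would only be responsible for shipping the collected tokens to the cluster, not for obtaining them.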
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]