HeartSaVioR commented on a change in pull request #27191: [SPARK-30495][SS]
Consider spark.security.credentials.kafka.enabled and cluster configuration
when checking latest delegation token
URL: https://github.com/apache/spark/pull/27191#discussion_r366539356
##########
File path:
external/kafka-0-10-token-provider/src/main/scala/org/apache/spark/kafka010/KafkaTokenUtil.scala
##########
@@ -291,13 +291,14 @@ private[spark] object KafkaTokenUtil extends Logging {
   }

   def isConnectorUsingCurrentToken(
+      sparkConf: SparkConf,
       params: ju.Map[String, Object],
       clusterConfig: Option[KafkaTokenClusterConf]): Boolean = {
-    if (params.containsKey(SaslConfigs.SASL_JAAS_CONFIG)) {
+    if (sparkConf.getBoolean("spark.security.credentials.kafka.enabled", true) &&
Review comment:
Requiring a HadoopDelegationTokenManager instance seems to be overkill given
we only need `isServiceEnabled`.
In other words, `isServiceEnabled` doesn't necessarily need to be a method of
the `HadoopDelegationTokenManager` class. I can't find any usage overriding it
even though its scope is protected. IMO it would be better to move
`isServiceEnabled` to the HadoopDelegationTokenManager `object` (taking the
Spark conf as a parameter) and reuse it; a rough sketch follows below.
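
A minimal sketch of what such a companion-object method could look like. The
signature and body here are my approximation, and the deprecated-key fallback
the instance method performs today is omitted for brevity:

```scala
import org.apache.spark.SparkConf

object HadoopDelegationTokenManager {
  // Sketch only: resolve spark.security.credentials.<service>.enabled from
  // the given SparkConf, defaulting to true when the key is not set.
  def isServiceEnabled(sparkConf: SparkConf, serviceName: String): Boolean = {
    sparkConf
      .getOption(s"spark.security.credentials.$serviceName.enabled")
      .map(_.toBoolean)
      .getOrElse(true)
  }
}
```

`KafkaTokenUtil.isConnectorUsingCurrentToken` could then call
`HadoopDelegationTokenManager.isServiceEnabled(sparkConf, "kafka")` instead of
hard-coding the `getBoolean` lookup.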
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]