[ https://issues.apache.org/jira/browse/FLINK-27191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17521188#comment-17521188 ]
Yuan Zhu commented on FLINK-27191:
----------------------------------
But when the job starts running, the Hive source needs to connect to HDFS/HiveMetastore
with a specific principal. IMHO, the TM installs the HadoopModule when it starts, so if
we try to connect to HDFS/HiveMetastore with a different principal, that configuration
will conflict with the TM's. How can we avoid that?
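
For what it's worth, one way to avoid touching the process-wide login that HadoopModule
installs on the TM might be to create a separate UGI per Hive catalog and run the
HDFS/HiveMetastore calls inside doAs(). A rough sketch follows; the principal, keytab
path and cluster URI are made-up placeholders, and this is not the design proposed in
this ticket:

{code:java}
import java.net.URI;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class PerConnectorKerberosSketch {

    public static void main(String[] args) throws Exception {
        // Hadoop configuration pointing at the second Hive cluster's services.
        // All names and paths below are placeholders, not values from this ticket.
        Configuration otherClusterConf = new Configuration();
        otherClusterConf.set("hadoop.security.authentication", "kerberos");

        // The TM's HadoopModule has already called UGI.setConfiguration() and logged
        // in the static login user. loginUserFromKeytabAndReturnUGI() creates an
        // *additional* UGI and leaves that static login user untouched.
        UserGroupInformation connectorUgi =
                UserGroupInformation.loginUserFromKeytabAndReturnUGI(
                        "hive-b@OTHER.REALM", "/path/to/hive-b.keytab");

        // RPCs issued inside doAs() carry connectorUgi's Kerberos credentials,
        // so they do not clash with the principal the TM logged in with.
        connectorUgi.doAs((PrivilegedExceptionAction<Void>) () -> {
            try (FileSystem fs = FileSystem.newInstance(
                    URI.create("hdfs://other-cluster"), otherClusterConf)) {
                System.out.println(fs.exists(new Path("/warehouse")));
            }
            return null;
        });
    }
}
{code}

Whether ticket renewal and delegation-token handling would also need per-catalog
treatment is a separate question.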
> Support multi kerberos-enabled Hive clusters
> ---------------------------------------------
>
> Key: FLINK-27191
> URL: https://issues.apache.org/jira/browse/FLINK-27191
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Hive
> Reporter: luoyuxia
> Priority: Major
> Fix For: 1.16.0
>
>
> Currently, to access a kerberos-enabled Hive cluster, users are expected to add
> the principal and keytab in flink-conf. But that only allows access to one Hive
> cluster per Flink cluster; we are also expected to support multiple
> kerberos-enabled Hive clusters in one Flink cluster.