zhouyifan279 commented on a change in pull request #1074:
URL: https://github.com/apache/incubator-kyuubi/pull/1074#discussion_r706739149
##########
File path: kyuubi-server/src/main/scala/org/apache/kyuubi/client/KyuubiSyncThriftClientHandler.scala
##########

@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kyuubi.client
+
+import java.lang.reflect.{InvocationHandler, InvocationTargetException, Method}
+
+import org.apache.kyuubi.credentials.HadoopCredentialsManager
+import org.apache.kyuubi.session.SessionHandle
+
+class KyuubiSyncThriftClientHandler(
+    credentialsManager: HadoopCredentialsManager,
+    client: KyuubiSyncThriftClient,
+    sessionId: String,
+    appUser: String) extends InvocationHandler {
+
+  override def invoke(proxy: Any, method: Method, args: Array[AnyRef]): AnyRef = {
+    credentialsManager.sendCredentialsIfNeeded(

Review comment:
   I just thought of a problem in another case: the user designates, via the JDBC URL or a user-scope configuration, an HDFS service or Hive metastore server different from the one in the Kyuubi server's configuration.

   This should be fine for `DFSClient`, because we update tokens by alias and an HDFS delegation token's alias is the NameNode address; we would merely add some tokens that are not needed. But it is a problem for `HiveMetaStoreClient`, because in Spark the Hive delegation token's alias is fixed to "hive.server2.delegation.token", so the initially correct token will be overwritten by a token for another server.

   To fix this problem, I think we should add the following procedure (a sketch follows the list):
   1. On the Kyuubi server side, set the Hive delegation token's service name to the Hive metastore URIs.
   2. On the engine side, before updating the token, select the first token whose metastore URIs are compatible with the URIs defined in `HiveConf`. Metastore URIs are compatible if they share at least one URI.
   3. Set that token's service to "" and add it to the engine's credentials. (The Hive metastore delegation token's service in the engine has to be "", because `HiveMetaStoreClient` selects the first token whose service is "" and whose kind is "HIVE_DELEGATION_TOKEN" to talk to the server.)
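   To make steps 1-3 concrete, here is a minimal sketch against Hadoop's `Credentials`/`Token` API. The object and method names (`HiveTokenSelector`, `tagWithMetastoreUris`, `selectCompatibleToken`, `addToEngineCredentials`) and the comma-separated URI matching are hypothetical illustrations, not existing Kyuubi code:

   ```scala
   import scala.collection.JavaConverters._

   import org.apache.hadoop.io.Text
   import org.apache.hadoop.security.Credentials
   import org.apache.hadoop.security.token.{Token, TokenIdentifier}

   object HiveTokenSelector {

     private val HiveTokenKind = new Text("HIVE_DELEGATION_TOKEN")

     // Metastore URIs are "compatible" if they share at least one URI
     // (hive.metastore.uris is a comma-separated list).
     private def urisOverlap(a: String, b: String): Boolean = {
       def toSet(s: String) = s.split(",").map(_.trim).filter(_.nonEmpty).toSet
       toSet(a).intersect(toSet(b)).nonEmpty
     }

     // Step 1 (Kyuubi server side): alias the freshly obtained token with the
     // metastore URIs so the engine can match it against its own configuration.
     def tagWithMetastoreUris(
         token: Token[_ <: TokenIdentifier],
         metastoreUris: String): Unit =
       token.setService(new Text(metastoreUris))

     // Step 2 (engine side): pick the first HIVE_DELEGATION_TOKEN whose service
     // (the metastore URIs set in step 1) overlaps the engine's hive.metastore.uris.
     def selectCompatibleToken(
         incoming: Credentials,
         engineMetastoreUris: String): Option[Token[_ <: TokenIdentifier]] =
       incoming.getAllTokens.asScala.find { token =>
         token.getKind == HiveTokenKind &&
           urisOverlap(token.getService.toString, engineMetastoreUris)
       }

     // Step 3 (engine side): blank out the service so HiveMetaStoreClient, which
     // looks for service == "" and kind == HIVE_DELEGATION_TOKEN, picks it up.
     // The alias matches the one Spark fixes for Hive delegation tokens.
     def addToEngineCredentials(
         engineCreds: Credentials,
         token: Token[_ <: TokenIdentifier]): Unit = {
       token.setService(new Text(""))
       engineCreds.addToken(new Text("hive.server2.delegation.token"), token)
     }
   }
   ```

   Matching by URI overlap rather than exact string equality would tolerate users listing the metastore URIs in a different order or as a subset of the server's list.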
