zhouyifan279 commented on a change in pull request #1074:
URL: https://github.com/apache/incubator-kyuubi/pull/1074#discussion_r706739149



##########
File path: kyuubi-server/src/main/scala/org/apache/kyuubi/client/KyuubiSyncThriftClientHandler.scala
##########
@@ -0,0 +1,63 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kyuubi.client
+
+import java.lang.reflect.{InvocationHandler, InvocationTargetException, Method}
+
+import org.apache.kyuubi.credentials.HadoopCredentialsManager
+import org.apache.kyuubi.session.SessionHandle
+
+class KyuubiSyncThriftClientHandler(
+    credentialsManager: HadoopCredentialsManager,
+    client: KyuubiSyncThriftClient,
+    sessionId: String,
+    appUser: String) extends InvocationHandler {
+
+  override def invoke(proxy: Any, method: Method, args: Array[AnyRef]): AnyRef = {
+    credentialsManager.sendCredentialsIfNeeded(

Review comment:
   Just thought of a problem in another case: the user designates an HDFS service or Hive metastore server in the JDBC URL, or in user-scope configuration, that differs from the service address in the Kyuubi server's configuration.
   
   It should be fine for `DFSClient`, because we update tokens by alias and an HDFS delegation token's alias is the NameNode address. We can skip the update if the engine does not hold a token under that alias.
   
   But it is a problem for `HiveMetaStoreClient`, because the Hive delegation token's alias is fixed to "hive.server2.delegation.token" in Spark. The initially correct token will be overwritten by a token for another server.
   
   To fix this problem, I think we should add the following procedure:
   
   1. At the Kyuubi server side, set the Hive delegation token's alias to the Hive metastore URIs.
   2. At the engine side, before updating the token, select the first token sent by the Kyuubi server whose metastore URIs are compatible with the URIs defined in the engine's HiveConf. Metastore URIs are compatible if they share at least one URI.
   3. Add the selected token to the engine's credentials with alias "hive.server2.delegation.token".
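   The selection logic in step 2 could be sketched like this. The object and method names are hypothetical, and tokens are represented as plain strings rather than Hadoop `Token` objects:

   ```scala
   // Hypothetical sketch of step 2: pick the first token whose metastore URIs
   // overlap with the URIs configured on the engine side.
   object MetastoreUriMatcher {

     // Two comma-separated URI lists are "compatible" if they share at least one URI.
     def isCompatible(tokenUris: String, engineUris: String): Boolean = {
       val a = tokenUris.split(",").map(_.trim).filter(_.nonEmpty).toSet
       val b = engineUris.split(",").map(_.trim).filter(_.nonEmpty).toSet
       (a intersect b).nonEmpty
     }

     // Select the first (uris, token) pair sent by the server whose URIs are
     // compatible with the engine's configured metastore URIs.
     def selectToken(tokensByUris: Seq[(String, String)],
                     engineUris: String): Option[String] =
       tokensByUris.collectFirst {
         case (uris, token) if isCompatible(uris, engineUris) => token
       }
   }
   ```

   The selected token would then be stored under the fixed "hive.server2.delegation.token" alias (step 3), so Spark's `HiveMetaStoreClient` picks up the token for the right metastore.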




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
