jiaoqingbo commented on code in PR #2286:
URL: https://github.com/apache/incubator-kyuubi/pull/2286#discussion_r844947696
##########
kyuubi-server/src/main/scala/org/apache/kyuubi/credentials/HadoopCredentialsManager.scala:
##########
@@ -183,6 +184,10 @@ class HadoopCredentialsManager private (name: String) extends AbstractService(na
       warn(
         s"Failed to send new credentials to SQL engine through session $sessionId",
         exception)
+ if (DELEGATION_TOKEN_IS_NOT_SUPPORTED.equals(exception.getMessage)) {
+ stop()
Review Comment:
> It is overkill to stop `HadoopCredentialManager` just because one of the
Kyuubi Engines does not support token renewal. Doing so will cause token
expiration for the other Kyuubi Engines.
>
> I think we should handle this more carefully in one of these two ways:
>
> 1. At Kyuubi Server side, decide whether to `sendCredentials` according to
`kyuubi.engine.type`.
> 2. At Kyuubi Engine side, `TFrontendService#RenewDelegationToken` returns
SUCCESS_STATUS by default.
>
> By the first way, `HadoopCredentialManager` can maintain fewer
userCredentials as currently only Spark Engine requires token renewal.
>
> The second way conforms better to Kyuubi's architecture, that is, keeping
engine-specific code at the engine side.
You are correct; I had assumed the Kyuubi server could only support one engine
at a time.
Method 1 might work. Method 2 would make the server assume by default that
every engine supports token renewal, so it would still issue useless RPC calls
to engines that do not.
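For illustration, method 1 amounts to a server-side check on the session's
`kyuubi.engine.type` before calling `sendCredentials`. The sketch below is a
minimal, hypothetical version of that check; the object name, set name, and
method are illustrative only and do not exist in the Kyuubi codebase. It also
bakes in the assumption from the discussion that currently only the Spark
engine renews delegation tokens.

```scala
// Hypothetical server-side helper sketching method 1: gate sendCredentials
// on the engine type. Names here are NOT real Kyuubi APIs.
object CredentialsDecision {
  // Assumption from the discussion: only the Spark engine currently
  // requires delegation token renewal.
  private val engineTypesNeedingTokens: Set[String] = Set("SPARK_SQL")

  // Decide whether the server should send renewed credentials to a
  // session, based on its kyuubi.engine.type value (case-insensitive).
  def shouldSendCredentials(engineType: String): Boolean =
    engineTypesNeedingTokens.contains(engineType.toUpperCase)
}
```

With a check like this, `HadoopCredentialManager` would track userCredentials
only for sessions whose engine actually consumes them, instead of stopping the
whole manager when one engine rejects a renewal.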
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]