[ https://issues.apache.org/jira/browse/SPARK-21918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16155047#comment-16155047 ]
Marco Gaido commented on SPARK-21918:
-------------------------------------

What I meant is that if we want to support doAs, we shouldn't support it only for DDL operations, but also for all DML and DQL. I am fairly sure your fix does not affect DML and DQL behavior, i.e. with your change doAs would be supported only for DDL operations. That would leave us in a hybrid situation: doAs working for DDL but not for DML and DQL, which is not desirable.

PS: out of curiosity, may I ask how you verified that your DDL commands were run as the session user? Thanks.

> HiveClient shouldn't share Hive object between different thread
> ---------------------------------------------------------------
>
>                 Key: SPARK-21918
>                 URL: https://issues.apache.org/jira/browse/SPARK-21918
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Hu Liu,
>
> I'm testing the Spark Thrift Server and found that all DDL statements are
> run as user hive even when hive.server2.enable.doAs=true.
> The root cause is that the Hive object is shared between different threads in
> HiveClientImpl:
> {code:java}
> private def client: Hive = {
>   if (clientLoader.cachedHive != null) {
>     clientLoader.cachedHive.asInstanceOf[Hive]
>   } else {
>     val c = Hive.get(conf)
>     clientLoader.cachedHive = c
>     c
>   }
> }
> {code}
> But in impersonation mode, the Hive object should be shared only within a
> thread, so that the metastore client in Hive is associated with the right
> user.
> We can fix this by passing the Hive object of the parent thread to the child
> thread when running the SQL.
> I already have an initial patch for review and I'm glad to work on it if
> anyone could assign it to me.
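The per-thread caching the reporter describes can be sketched generically with a ThreadLocal: each thread lazily builds and reuses its own client instead of reading a cache shared across threads. This is an illustrative Java sketch only; FakeHiveClient is a hypothetical stand-in for Hive, not Spark's actual patch.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalClientDemo {

    // Hypothetical stand-in for the Hive client; real code would
    // construct it from the session configuration and user.
    static class FakeHiveClient {
        final String ownerThread = Thread.currentThread().getName();
    }

    // Each thread lazily creates and caches its own client, so
    // per-user state cannot leak between concurrent sessions.
    private static final ThreadLocal<FakeHiveClient> CLIENT =
            ThreadLocal.withInitial(FakeHiveClient::new);

    static FakeHiveClient client() {
        return CLIENT.get();
    }

    public static void main(String[] args) throws Exception {
        FakeHiveClient first = client();
        FakeHiveClient second = client();
        // Same thread reuses the cached instance.
        System.out.println(first == second);

        ExecutorService pool = Executors.newSingleThreadExecutor();
        FakeHiveClient other = pool.submit(ThreadLocalClientDemo::client).get();
        pool.shutdown();
        // A different thread gets its own instance.
        System.out.println(first == other);
    }
}
{code}

Running this prints true then false: the main thread sees one cached client across calls, while the pool thread gets a separate one. For the doAs case, the child thread would instead inherit the parent session's Hive object, as the reporter suggests.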