[jira] [Created] (HIVE-27197) Iceberg: Support Iceberg version travel by reference name
zhangbutao created HIVE-27197: - Summary: Iceberg: Support Iceberg version travel by reference name Key: HIVE-27197 URL: https://issues.apache.org/jira/browse/HIVE-27197 Project: Hive Issue Type: Improvement Components: Iceberg integration Reporter: zhangbutao This ticket is inspired by https://github.com/apache/iceberg/pull/6575 -- This message was sent by Atlassian Jira (v8.20.10#820010)
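For context: Hive already supports Iceberg time travel by snapshot ID or timestamp via FOR SYSTEM_VERSION AS OF / FOR SYSTEM_TIME AS OF. This ticket would presumably extend that to Iceberg branch/tag reference names, which the linked Iceberg PR enables at the library level. A hedged sketch (table and branch names invented; the exact Hive syntax is whatever the eventual patch defines):

{code:sql}
-- Existing: travel by snapshot id or timestamp
SELECT * FROM ice_tbl FOR SYSTEM_VERSION AS OF 1234567890123456789;
SELECT * FROM ice_tbl FOR SYSTEM_TIME AS OF '2023-03-01 00:00:00';

-- Proposed: travel by a named reference (branch or tag); exact syntax TBD
SELECT * FROM ice_tbl FOR SYSTEM_VERSION AS OF 'my_branch';
{code}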
[jira] [Created] (HIVE-27141) Iceberg: Add more iceberg table metadata
zhangbutao created HIVE-27141: - Summary: Iceberg: Add more iceberg table metadata Key: HIVE-27141 URL: https://issues.apache.org/jira/browse/HIVE-27141 Project: Hive Issue Type: Improvement Components: Iceberg integration Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26839) Upgrade Iceberg from 1.0.0 to 1.1.0
zhangbutao created HIVE-26839: - Summary: Upgrade Iceberg from 1.0.0 to 1.1.0 Key: HIVE-26839 URL: https://issues.apache.org/jira/browse/HIVE-26839 Project: Hive Issue Type: Improvement Reporter: zhangbutao 1) Update the Iceberg version to 1.1.0. 2) Fix usages of removed interface methods, e.g. https://github.com/apache/iceberg/pull/5771 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26693) HS2 can not read/write hive_catalog iceberg table created by other engines
zhangbutao created HIVE-26693: - Summary: HS2 can not read/write hive_catalog iceberg table created by other engines Key: HIVE-26693 URL: https://issues.apache.org/jira/browse/HIVE-26693 Project: Hive Issue Type: Improvement Components: HiveServer2, StorageHandler Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26675) Update Iceberg from 0.14.1 to 1.0.0
zhangbutao created HIVE-26675: - Summary: Update Iceberg from 0.14.1 to 1.0.0 Key: HIVE-26675 URL: https://issues.apache.org/jira/browse/HIVE-26675 Project: Hive Issue Type: Improvement Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26551) Support CREATE TABLE LIKE FILE for ORC
zhangbutao created HIVE-26551: - Summary: Support CREATE TABLE LIKE FILE for ORC Key: HIVE-26551 URL: https://issues.apache.org/jira/browse/HIVE-26551 Project: Hive Issue Type: New Feature Components: HiveServer2 Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
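CREATE TABLE LIKE FILE infers a table's schema from an existing data file; this ticket extends it to ORC. A sketch of the intended usage, assuming the same shape as the existing Parquet support (the path is illustrative):

{code:sql}
-- Infer the table schema from the footer metadata of an existing ORC file
CREATE TABLE orc_from_file LIKE FILE ORC '/path/to/existing/file.orc';
{code}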
[jira] [Created] (HIVE-26503) Hive JDBC Storage Handler: Failed to create jdbc table with hive.sql.query
zhangbutao created HIVE-26503: - Summary: Hive JDBC Storage Handler: Failed to create jdbc table with hive.sql.query Key: HIVE-26503 URL: https://issues.apache.org/jira/browse/HIVE-26503 Project: Hive Issue Type: Bug Components: JDBC storage handler Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26466) NullPointerException on HiveQueryLifeTimeHook:checkAndRollbackCTAS
zhangbutao created HIVE-26466: - Summary: NullPointerException on HiveQueryLifeTimeHook:checkAndRollbackCTAS Key: HIVE-26466 URL: https://issues.apache.org/jira/browse/HIVE-26466 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26459) ReduceRecordProcessor: move to using a timeout version of waitForAllInputsReady(TEZ-3302)
zhangbutao created HIVE-26459: - Summary: ReduceRecordProcessor: move to using a timeout version of waitForAllInputsReady(TEZ-3302) Key: HIVE-26459 URL: https://issues.apache.org/jira/browse/HIVE-26459 Project: Hive Issue Type: Improvement Components: HiveServer2 Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-26345) SQLOperation class should output the real exception message to the JDBC client
zhangbutao created HIVE-26345: - Summary: SQLOperation class should output the real exception message to the JDBC client Key: HIVE-26345 URL: https://issues.apache.org/jira/browse/HIVE-26345 Project: Hive Issue Type: Improvement Components: HiveServer2 Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26323) Expose real exception information to the client in JdbcSerDe.java
zhangbutao created HIVE-26323: - Summary: Expose real exception information to the client in JdbcSerDe.java Key: HIVE-26323 URL: https://issues.apache.org/jira/browse/HIVE-26323 Project: Hive Issue Type: Bug Components: JDBC storage handler Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao The *_initialize_* method in JdbcSerDe.java always returns the same exception message to the client, no matter what problem occurs. {code:java} } catch (Exception e) { throw new SerDeException("Caught exception while initializing the SqlSerDe", e); } {code} We should expose the real exception message to the client. This is a regression from HIVE-24560. Steps to reproduce: 1. Create a JDBC table using an incorrect MySQL password or an incorrect MySQL host: {code:java} CREATE EXTERNAL TABLE jdbc_testtbl ( id bigint ) STORED BY 'org.apache.hive.storage.jdbc.JdbcStorageHandler' TBLPROPERTIES ( "hive.sql.database.type" = "MYSQL", "hive.sql.jdbc.driver" = "com.mysql.jdbc.Driver", "hive.sql.jdbc.url" = "jdbc:mysql://localhost:3306/testdb", "hive.sql.dbcp.username" = "root", "hive.sql.dbcp.password" = "password", "hive.sql.table" = "mysqltbl", "hive.sql.dbcp.maxActive" = "1" ); {code} 2. 
The Beeline client always displays the same exception message, whether the MySQL password or the MySQL host is incorrect: {code:java} INFO : Starting task [Stage-0:DDL] in serial mode ERROR : Failed org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Caught exception while initializing the SqlSerDe) at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:1343) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:1348) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.ddl.table.create.CreateTableOperation.createTableNonReplaceMode(CreateTableOperation.java:141) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.ddl.table.create.CreateTableOperation.execute(CreateTableOperation.java:99) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.ddl.DDLTask.execute(DDLTask.java:84) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:212) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:354) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:327) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:244) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:105) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:343) 
~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:205) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.run(Driver.java:154) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:185) ~[hive-exec-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:233) ~[hive-service-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation.access$500(SQLOperation.java:88) ~[hive-service-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:336) ~[hive-service-4.0.0-alpha-2-SNAPSHOT.jar:4.0.0-alpha-2-SNAPSHOT] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112] at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) ~[hadoop-common-3.1.0-bc3.2.0.jar:?] at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:356)
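A minimal sketch of one possible fix (illustrative only, not the actual HIVE-26323 patch; the class and method names here are invented stand-ins): carry the underlying cause's message into the SerDeException text, so different root causes remain distinguishable on the client side:

```java
// Minimal sketch: append the root cause's message to the wrapper exception,
// so "Access denied" vs. "Connection refused" surface to Beeline.
public class ExceptionMessageDemo {

    // Stand-in for org.apache.hadoop.hive.serde2.SerDeException.
    static class SerDeException extends Exception {
        SerDeException(String msg, Throwable cause) {
            super(msg, cause);
        }
    }

    // Before the fix, the wrapper always carried the same fixed text.
    // Including e.getMessage() exposes the real failure to the client.
    static SerDeException wrap(Exception e) {
        return new SerDeException(
            "Caught exception while initializing the SqlSerDe: " + e.getMessage(), e);
    }

    public static void main(String[] args) {
        Exception cause = new RuntimeException("Access denied for user 'root'@'localhost'");
        System.out.println(wrap(cause).getMessage());
    }
}
```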
[jira] [Created] (HIVE-26299) Drop data connector with argument ifNotExists(true) should not throw NoSuchObjectException
zhangbutao created HIVE-26299: - Summary: Drop data connector with argument ifNotExists(true) should not throw NoSuchObjectException Key: HIVE-26299 URL: https://issues.apache.org/jira/browse/HIVE-26299 Project: Hive Issue Type: Bug Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26248) Add data connector authorization on HMS server-side
zhangbutao created HIVE-26248: - Summary: Add data connector authorization on HMS server-side Key: HIVE-26248 URL: https://issues.apache.org/jira/browse/HIVE-26248 Project: Hive Issue Type: Sub-task Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26247) Filter out results of 'show connectors' on HMS server-side
zhangbutao created HIVE-26247: - Summary: Filter out results of 'show connectors' on HMS server-side Key: HIVE-26247 URL: https://issues.apache.org/jira/browse/HIVE-26247 Project: Hive Issue Type: Sub-task Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26246) Filter out results of 'show connectors' on HMS client-side
zhangbutao created HIVE-26246: - Summary: Filter out results of 'show connectors' on HMS client-side Key: HIVE-26246 URL: https://issues.apache.org/jira/browse/HIVE-26246 Project: Hive Issue Type: Sub-task Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26245) Use FilterHooks to filter out results of 'show connectors'
zhangbutao created HIVE-26245: - Summary: Use FilterHooks to filter out results of 'show connectors' Key: HIVE-26245 URL: https://issues.apache.org/jira/browse/HIVE-26245 Project: Hive Issue Type: Improvement Components: Standalone Metastore Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26195) Keep Kafka handler naming style consistent with others
zhangbutao created HIVE-26195: - Summary: Keep Kafka handler naming style consistent with others Key: HIVE-26195 URL: https://issues.apache.org/jira/browse/HIVE-26195 Project: Hive Issue Type: Improvement Components: StorageHandler Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao Keep the Kafka handler naming style consistent with the others (JDBC, HBase, Kudu, Druid). -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26192) JDBC data connector queries occur exception at cbo stage
zhangbutao created HIVE-26192: - Summary: JDBC data connector queries occur exception at cbo stage Key: HIVE-26192 URL: https://issues.apache.org/jira/browse/HIVE-26192 Project: Hive Issue Type: Bug Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26180) Change MySQLConnectorProvider driver from mariadb to mysql
zhangbutao created HIVE-26180: - Summary: Change MySQLConnectorProvider driver from mariadb to mysql Key: HIVE-26180 URL: https://issues.apache.org/jira/browse/HIVE-26180 Project: Hive Issue Type: Bug Components: StorageHandler Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26171) HMSHandler get_all_tables method can not retrieve tables from remote database
zhangbutao created HIVE-26171: - Summary: HMSHandler get_all_tables method can not retrieve tables from remote database Key: HIVE-26171 URL: https://issues.apache.org/jira/browse/HIVE-26171 Project: Hive Issue Type: Bug Components: Standalone Metastore Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao At present, the get_all_tables method in HMSHandler does not retrieve tables from a remote database. However, other components such as Presto, and some jobs we developed, use this API instead of _get_tables_, which retrieves tables from both native and remote databases. {code:java} // get_all_tables can only get tables from a native database public List get_all_tables(final String dbname) throws MetaException {{code} {code:java} // get_tables can get tables from both native and remote databases public List get_tables(final String dbname, final String pattern){code} I think we should fix get_all_tables so that it retrieves tables from remote databases as well. -- This message was sent by Atlassian Jira (v8.20.7#820007)
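One possible direction, sketched with a toy model since the real HMSHandler depends on the metastore (all class names and data below are invented for illustration): make get_all_tables delegate to the same code path as get_tables with a match-all pattern, so tables from remote (connector-backed) databases are included too:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy stand-in for HMSHandler listing logic; illustrates the delegation idea only.
public class GetAllTablesDemo {
    // Pretend catalog: tables in native (local metastore) vs. remote databases.
    static final Map<String, List<String>> NATIVE = Map.of("db1", List.of("t1", "t2"));
    static final Map<String, List<String>> REMOTE = Map.of("db1", List.of("r1"));

    // Mirrors get_tables(db, pattern): consults both native and remote sources.
    static List<String> getTables(String db, String pattern) {
        List<String> out = new ArrayList<>(NATIVE.getOrDefault(db, List.of()));
        out.addAll(REMOTE.getOrDefault(db, List.of()));
        String regex = pattern.replace("*", ".*");
        out.removeIf(t -> !t.matches(regex));
        return out;
    }

    // Proposed shape for get_all_tables: delegate with a match-all pattern
    // instead of querying only the native store.
    static List<String> getAllTables(String db) {
        return getTables(db, "*");
    }

    public static void main(String[] args) {
        // Includes both native tables (t1, t2) and the remote table (r1).
        System.out.println(getAllTables("db1"));
    }
}
```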
[jira] [Created] (HIVE-26170) Code cleanup in jdbc dataconnector
zhangbutao created HIVE-26170: - Summary: Code cleanup in jdbc dataconnector Key: HIVE-26170 URL: https://issues.apache.org/jira/browse/HIVE-26170 Project: Hive Issue Type: Improvement Components: Standalone Metastore Affects Versions: 4.0.0-alpha-2 Reporter: zhangbutao Clean up unused imports; fix the incorrect Logger in PostgreSQLConnectorProvider. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (HIVE-26131) Incorrect OutputFormat when describing jdbc connector table
zhangbutao created HIVE-26131: - Summary: Incorrect OutputFormat when describing jdbc connector table Key: HIVE-26131 URL: https://issues.apache.org/jira/browse/HIVE-26131 Project: Hive Issue Type: Bug Components: JDBC storage handler Affects Versions: 4.0.0-alpha-1, 4.0.0-alpha-2 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-26130) Incorrect matching of external table when validating NOT NULL constraints
zhangbutao created HIVE-26130: - Summary: Incorrect matching of external table when validating NOT NULL constraints Key: HIVE-26130 URL: https://issues.apache.org/jira/browse/HIVE-26130 Project: Hive Issue Type: Bug Reporter: zhangbutao Assignee: zhangbutao -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25964) Create iceberg table with ranger authorization failed with storage URI NullPointerException
zhangbutao created HIVE-25964: - Summary: Create iceberg table with ranger authorization failed with storage URI NullPointerException Key: HIVE-25964 URL: https://issues.apache.org/jira/browse/HIVE-25964 Project: Hive Issue Type: Bug Components: Authorization, HiveServer2 Affects Versions: 4.0.0 Reporter: zhangbutao set hive.security.authorization.enabled=true; set hive.security.authorization.manager=org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory; create table test_ice(id int) stored by iceberg; {code:java} 2022-02-17T16:32:58,304 ERROR [6bf2c99a-72eb-4608-9189-b64bd59df590 HiveServer2-Handler-Pool: Thread-83] command.CommandAuthorizerV2: Exception occurred while getting the URI from storage handler: null java.lang.NullPointerException: null at java.util.Hashtable.put(Hashtable.java:459) ~[?:1.8.0_112] at java.util.Properties.setProperty(Properties.java:166) ~[?:1.8.0_112] at org.apache.iceberg.mr.hive.IcebergTableUtil.getTable(IcebergTableUtil.java:87) ~[hive-iceberg-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.iceberg.mr.hive.HiveIcebergStorageHandler.getURIForAuth(HiveIcebergStorageHandler.java:372) ~[hive-iceberg-handler-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.addHivePrivObject(CommandAuthorizerV2.java:197) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.getHivePrivObjects(CommandAuthorizerV2.java:142) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizerV2.doAuthorization(CommandAuthorizerV2.java:76) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.security.authorization.command.CommandAuthorizer.doAuthorization(CommandAuthorizer.java:58) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.authorize(Compiler.java:426) 
~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:112) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:501) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:453) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:417) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:411) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:204) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:267) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.operation.Operation.run(Operation.java:281) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:545) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:530) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_112] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_112] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_112] at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_112] at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at 
org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) ~[hive-service-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT] at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_112] at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112] at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) ~[hadoop-common-3.1.0-bc3.2.0.jar:?] at
[jira] [Created] (HIVE-25828) Remove unused import and method in ParseUtils
zhangbutao created HIVE-25828: - Summary: Remove unused import and method in ParseUtils Key: HIVE-25828 URL: https://issues.apache.org/jira/browse/HIVE-25828 Project: Hive Issue Type: Improvement Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (HIVE-25206) Add primary key for partial metadata script
zhangbutao created HIVE-25206: - Summary: Add primary key for partial metadata script Key: HIVE-25206 URL: https://issues.apache.org/jira/browse/HIVE-25206 Project: Hive Issue Type: Bug Components: Standalone Metastore Affects Versions: 4.0.0 Reporter: zhangbutao {code:java} standalone-metastore/metastore-server/src/main/sql/mysql/hive-schema-4.0.0.mysql.sql {code} Some metadata tables in hive-schema-4.0.0.mysql.sql don't have a primary key, e.g. *TXN_COMPONENTS* and *COMPLETED_TXN_COMPONENTS*. This causes an exception when the backend MySQL sets strict parameters such as *pxc_strict_mode='ENFORCING'*. {code:java} Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Unable to clean up java.sql.SQLException: Percona-XtraDB-Cluster prohibits use of DML command on a table (hive4s.txn_components) without an explicit primary key with pxc_strict_mode = ENFORCING or MASTER at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1078) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4187) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4119) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2570) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2814) at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1813) at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1727) at org.apache.hive.com.zaxxer.hikari.pool.ProxyStatement.executeUpdate(ProxyStatement.java:117) at org.apache.hive.com.zaxxer.hikari.pool.HikariProxyStatement.executeUpdate(HikariProxyStatement.java) at org.apache.hadoop.hive.metastore.txn.TxnHandler.cleanupRecords(TxnHandler.java:3962) at org.apache.hadoop.hive.metastore.AcidEventListener.onDropDatabase(AcidEventListener.java:58) at org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier$23.notify(MetaStoreListenerNotifier.java:94) at 
org.apache.hadoop.hive.metastore.MetaStoreListenerNotifier.notifyEvent(MetaStoreListenerNotifier.java:305) at org.apache.hadoop.hive.metastore.HMSHandler.drop_database_core(HMSHandler.java:1893) at org.apache.hadoop.hive.metastore.HMSHandler.drop_database(HMSHandler.java:1954) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) at com.sun.proxy.$Proxy28.drop_database(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:17577) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_database.getResult(ThriftHiveMetastore.java:17556) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:38) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:313) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
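A hypothetical sketch of the kind of schema change that would satisfy PXC here (the column name is invented; the real fix must fit the schema and upgrade scripts and any existing data):

{code:sql}
-- Add a surrogate primary key so Percona XtraDB Cluster accepts DML on the
-- table under pxc_strict_mode=ENFORCING. TC_ID is an illustrative name.
ALTER TABLE TXN_COMPONENTS
  ADD COLUMN TC_ID BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY;
{code}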
[jira] [Created] (HIVE-25186) Drop database fails with exception 'Cannot delete or update a parent row: a foreign key constraint fails'
zhangbutao created HIVE-25186: - Summary: Drop database fails with exception 'Cannot delete or update a parent row: a foreign key constraint fails' Key: HIVE-25186 URL: https://issues.apache.org/jira/browse/HIVE-25186 Project: Hive Issue Type: Bug Components: Standalone Metastore Affects Versions: 4.0.0 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-23043) Hive TPCDS: MR and Tez produce different results for query15
zhangbutao created HIVE-23043: - Summary: Hive TPCDS: MR and Tez produce different results for query15 Key: HIVE-23043 URL: https://issues.apache.org/jira/browse/HIVE-23043 Project: Hive Issue Type: Bug Affects Versions: 3.1.1, 3.1.0 Reporter: zhangbutao Hive TPCDS, 1 TB ORC data (tpcds_bin_partitioned_orc_1000): MR and Tez produce different results for query15. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22951) Fix invalid repo url in pom
zhangbutao created HIVE-22951: - Summary: Fix invalid repo url in pom Key: HIVE-22951 URL: https://issues.apache.org/jira/browse/HIVE-22951 Project: Hive Issue Type: Bug Affects Versions: 4.0.0 Reporter: zhangbutao Build Hive from source: {code:java} mvn clean install -DskipTests -Pdist{code} Some XML files and jars are requested from an invalid repository. A Maven log excerpt follows: {code:java} Downloading from druid-apache-rc-testing: https://repository.apache.org/content/repositories/orgapachedruid-1001/org/apache/orc/orc-core/1.5.9/orc-core-1.5.9.jar{code} We should fix the invalid repo URL in the pom. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22950) Fix invalid repo url in pom
zhangbutao created HIVE-22950: - Summary: Fix invalid repo url in pom Key: HIVE-22950 URL: https://issues.apache.org/jira/browse/HIVE-22950 Project: Hive Issue Type: Bug Affects Versions: 4.0.0 Reporter: zhangbutao Build Hive from source: {code:java} mvn clean install -DskipTests -Pdist{code} Some XML files and jars are requested from an invalid repository. A Maven log excerpt follows: {code:java} Downloading from druid-apache-rc-testing: https://repository.apache.org/content/repositories/orgapachedruid-1001/org/apache/orc/orc-core/1.5.9/orc-core-1.5.9.jar{code} We should fix the invalid repo URL in the pom. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22633) GROUP BY query with SET hive.groupby.skewindata=true causes "java.lang.NullPointerException"
zhangbutao created HIVE-22633: - Summary: GROUP BY query with SET hive.groupby.skewindata=true causes "java.lang.NullPointerException" Key: HIVE-22633 URL: https://issues.apache.org/jira/browse/HIVE-22633 Project: Hive Issue Type: Bug Affects Versions: 3.1.1, 3.1.0 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22550) Result of hive.query.string in job xml contains encoded string
zhangbutao created HIVE-22550: - Summary: Result of hive.query.string in job xml contains encoded string Key: HIVE-22550 URL: https://issues.apache.org/jira/browse/HIVE-22550 Project: Hive Issue Type: Bug Affects Versions: 3.1.1, 3.1.0 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22527) Hive on Tez: Job of merging small files will be submitted into another queue (default queue)
zhangbutao created HIVE-22527: - Summary: Hive on Tez: Job of merging small files will be submitted into another queue (default queue) Key: HIVE-22527 URL: https://issues.apache.org/jira/browse/HIVE-22527 Project: Hive Issue Type: Bug Affects Versions: 3.1.1, 3.1.0 Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-22368) Hive JDBC Storage Handler: some mysql data type can not be cast to hive data type
zhangbutao created HIVE-22368: - Summary: Hive JDBC Storage Handler: some mysql data type can not be cast to hive data type Key: HIVE-22368 URL: https://issues.apache.org/jira/browse/HIVE-22368 Project: Hive Issue Type: Bug Reporter: zhangbutao -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-21123) LLAP: same SQL and TPCDS data produce different results using textfile fileformat
zhangbutao created HIVE-21123: - Summary: LLAP: same SQL and TPCDS data produce different results using textfile fileformat Key: HIVE-21123 URL: https://issues.apache.org/jira/browse/HIVE-21123 Project: Hive Issue Type: Bug Components: File Formats, llap Affects Versions: 3.1.0 Reporter: zhangbutao -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-20864) No running LLAP daemons! Please check LLAP service status and zookeeper configuration
zhangbutao created HIVE-20864: - Summary: No running LLAP daemons! Please check LLAP service status and zookeeper configuration Key: HIVE-20864 URL: https://issues.apache.org/jira/browse/HIVE-20864 Project: Hive Issue Type: Bug Components: llap Affects Versions: 3.1.0 Reporter: zhangbutao I tested Hive 3.1.0 using TPCDS. Jobs failed after executing some SQL, even though I checked that the LLAP daemons were still alive and the LLAP znodes still existed in ZooKeeper. The error log follows: {code:java} ]Vertex failed, vertexName=Map 6, vertexId=vertex_1541034860050_0181_194_04, diagnostics=[Vertex vertex_1541034860050_0181_194_04 [Map 6] killed/failed due to:INIT_FAILURE, Fail to create InputInitializerManager, org.apache.tez.dag.api.TezReflectionException: Unable to instantiate class with 1 arguments: org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:71) at org.apache.tez.common.ReflectionUtils.createClazzInstance(ReflectionUtils.java:89) at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:152) at org.apache.tez.dag.app.dag.RootInputInitializerManager$1.run(RootInputInitializerManager.java:148) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682) at org.apache.tez.dag.app.dag.RootInputInitializerManager.createInitializer(RootInputInitializerManager.java:148) at org.apache.tez.dag.app.dag.RootInputInitializerManager.runInputInitializers(RootInputInitializerManager.java:121) at org.apache.tez.dag.app.dag.impl.VertexImpl.setupInputInitializerManager(VertexImpl.java:4101) at org.apache.tez.dag.app.dag.impl.VertexImpl.access$3100(VertexImpl.java:205) at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.handleInitEvent(VertexImpl.java:2912) at 
org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2859) at org.apache.tez.dag.app.dag.impl.VertexImpl$InitTransition.transition(VertexImpl.java:2841) at org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385) at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302) at org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46) at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487) at org.apache.tez.state.StateMachineTez.doTransition(StateMachineTez.java:59) at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:1939) at org.apache.tez.dag.app.dag.impl.VertexImpl.handle(VertexImpl.java:204) at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2317) at org.apache.tez.dag.app.DAGAppMaster$VertexEventDispatcher.handle(DAGAppMaster.java:2303) at org.apache.tez.common.AsyncDispatcher.dispatch(AsyncDispatcher.java:180) at org.apache.tez.common.AsyncDispatcher$1.run(AsyncDispatcher.java:115) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.GeneratedConstructorAccessor45.newInstance(Unknown Source) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.tez.common.ReflectionUtils.getNewInstance(ReflectionUtils.java:68) ... 25 more Caused by: java.lang.IllegalArgumentException: No running LLAP daemons! Please check LLAP service status and zookeeper configuration at com.google.common.base.Preconditions.checkArgument(Preconditions.java:122) at org.apache.hadoop.hive.ql.exec.tez.Utils.getSplitLocationProvider(Utils.java:57) at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.(HiveSplitGenerator.java:140) ... 
29 more {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
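The root cause in the trace above is a Guava `Preconditions.checkArgument` failing inside `Utils.getSplitLocationProvider` when the registry reports zero live LLAP daemons, which then surfaces as a reflective constructor failure in Tez. A minimal sketch of that guard pattern (class and method names here are illustrative, not Hive's actual code):

```java
import java.util.Collections;
import java.util.List;

public class SplitLocationGuard {
    // Mirrors the precondition seen in the stack trace: if the ZooKeeper
    // registry reports no live LLAP daemons, fail fast with the same kind
    // of IllegalArgumentException rather than scheduling splits nowhere.
    static List<String> getSplitLocations(List<String> liveDaemons) {
        if (liveDaemons == null || liveDaemons.isEmpty()) {
            throw new IllegalArgumentException(
                "No running LLAP daemons! Please check LLAP service status and zookeeper configuration");
        }
        return liveDaemons;
    }

    public static void main(String[] args) {
        try {
            getSplitLocations(Collections.emptyList());
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Because the check runs in the `HiveSplitGenerator` constructor, Tez reports it as `TezReflectionException: Unable to instantiate class`, which is why the real error only appears in the innermost `Caused by` clause.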
[jira] [Created] (HIVE-20611) HiveServer2 webui is unable to display queries data after enabling Kerberos
zhangbutao created HIVE-20611: - Summary: HiveServer2 webui is unable to display queries data after enabling Kerberos Key: HIVE-20611 URL: https://issues.apache.org/jira/browse/HIVE-20611 Project: Hive Issue Type: Bug Components: HiveServer2, Web UI Affects Versions: 3.1.0 Reporter: zhangbutao The HS2 web UI can display queries data when Kerberos is not enabled, but displays nothing in a Kerberos-enabled environment. Is there any special configuration needed for the HS2 web UI when Kerberos is enabled? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
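One likely cause is that the HS2 web UI has its own SPNEGO settings, separate from the Thrift endpoint's Kerberos configuration; if these are unset the UI endpoints can fail under Kerberos. A hedged hive-site.xml example of the relevant properties (the principal and keytab values are placeholders for your realm and deployment):

```xml
<!-- Enable SPNEGO authentication for the HiveServer2 web UI.
     Principal and keytab values below are placeholders. -->
<property>
  <name>hive.server2.webui.use.spnego</name>
  <value>true</value>
</property>
<property>
  <name>hive.server2.webui.spnego.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.webui.spnego.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
```

With SPNEGO enabled on the UI, the browser must also be configured for Kerberos negotiation, otherwise pages such as the queries view will still come back empty or denied.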
[jira] [Created] (HIVE-20548) Cannot start LLAP via YARN Service
zhangbutao created HIVE-20548: - Summary: Cannot start LLAP via YARN Service Key: HIVE-20548 URL: https://issues.apache.org/jira/browse/HIVE-20548 Project: Hive Issue Type: Bug Components: llap Affects Versions: 3.1.0 Reporter: zhangbutao We start LLAP through YARN Service instead of Slider, and the following problems occur: {code:java} 2018-09-12 19:32:48,629 - LLAP start command: /usr/bch/current/hive-server2/bin/hive --service llap --size 10930m --startImmediately --name llap0 --cache 0m --xmx 8m --loglevel INFO --output /var/lib/ambari-agent/tmp/llap-yarn-service_2018-09-12_11-32-48 --service-placement 4 --skiphadoopversion --skiphbasecp --instances 1 --logger query-routing --args " -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:TLABSize=8m -XX:+ResizeTLAB -XX:+UseNUMA -XX:+AggressiveOpts -XX:InitiatingHeapOccupancyPercent=70 -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=40 -XX:G1ReservePercent=20 -XX:MaxGCPauseMillis=200 -XX:MetaspaceSize=1024m" SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/bch/3.0.0/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist WARN cli.LlapServiceDriver: Ignoring unknown llap server parameter: [hive.aux.jars.path] WARN cli.LlapServiceDriver: Java versions might not match : JAVA_HOME=[/usr/jdk64/jdk1.8.0_112],process jre=[/usr/jdk64/jdk1.8.0_112/jre] WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist 11:32:54 Running as a child of LlapServiceDriver 11:32:54 Prepared the files 11:33:13 Packaged the files WARN curator.CuratorZookeeperClient: session timeout [1] is less than connection timeout [15000] ERROR client.ServiceClient: Error on destroy 'llap0': not found. 
WARN client.ServiceClient: Property yarn.service.framework.path has a value /bch/apps/3.0.0/yarn/service-dep.tar.gz, but is not a valid file 2018-09-12 19:33:17,385 - 2018-09-12 19:33:17,385 - LLAP status command : /usr/bch/current/hive-server2/bin/hive --service llapstatus -w -r 0.8 -i 2 -t 400 SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/usr/bch/3.0.0/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/usr/bch/3.0.0/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] WARN conf.HiveConf: HiveConf of name hive.hook.proto.base-directory does not exist WARN conf.HiveConf: HiveConf of name hive.strict.managed.tables does not exist WARN conf.HiveConf: HiveConf of name hive.stats.fetch.partition.stats does not exist WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist WARN conf.HiveConf: HiveConf of name hive.druid.select.distribute does not exist LLAPSTATUS WatchMode with timeout=400 s LLAP Starting up with AppId=application_1536745653378_0002. WARN cli.LlapStatusServiceDriver: COMPLETE state reached while waiting for RUNNING state. Failing. Final diagnostics: null LLAP Application already complete. ApplicationId=application_1536745653378_0002
[jira] [Created] (HIVE-20401) HiveServer2 is blocked at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1144)
zhangbutao created HIVE-20401: - Summary: HiveServer2 is blocked at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1144) Key: HIVE-20401 URL: https://issues.apache.org/jira/browse/HIVE-20401 Project: Hive Issue Type: Bug Components: Beeline, HiveServer2 Affects Versions: 1.2.2 Reporter: zhangbutao The HiveServer2 process often gets stuck, and jstack shows about one hundred threads blocked at the following code, waiting for the monitor {code:java} 0x0003c0acca40 {code}: {code:java} "HiveServer2-Handler-Pool: Thread-935985" #935985 prio=5 os_prio=0 tid=0x7fd71470c000 nid=0x3e1f waiting for monitor entry [0x7fd6d6eee000] at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1144) - waiting to lock <0x0003c0acca40> (a java.lang.Object) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1139) at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110) at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181) at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:423) at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:410) at sun.reflect.GeneratedMethodAccessor199.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78) at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36) at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962) at 
org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59) at com.sun.proxy.$Proxy22.executeStatementAsync(Unknown Source) at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:275) at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:492) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313) at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:698) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) {code} We found the one thread holding the monitor {code:java} 0x0003c0acca40 {code}; its stack is as follows: {code:java} "HiveServer2-Handler-Pool: Thread-809399" #809399 prio=5 os_prio=0 tid=0x7fd71433e000 nid=0x37c0 in Object.wait() [0x7fd6d04b8000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:502) at org.apache.hadoop.util.concurrent.AsyncGet$Util.wait(AsyncGet.java:59) at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1483) - locked <0x0003d9ab5d80> (a org.apache.hadoop.ipc.Client$Call) at org.apache.hadoop.ipc.Client.call(Client.java:1441) at org.apache.hadoop.ipc.Client.call(Client.java:1351) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:235) at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) at com.sun.proxy.$Proxy18.getEZForPath(Unknown 
Source) at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getEZForPath(ClientNamenodeProtocolTranslatorPB.java:1413) at sun.reflect.GeneratedMethodAccessor225.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) at
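The two stacks together show the classic global-compilation-lock pattern: query compilation in that era's Driver is serialized behind a single shared monitor, so one compile stalled on a slow NameNode RPC (here `getEZForPath`) queues every other session behind it. A minimal sketch of the pattern (class and method names are illustrative, not Hive's actual code):

```java
public class CompileLockDemo {
    // One lock object shared by every session, mirroring the single
    // monitor <0x0003c0acca40> that all handler threads wait on above.
    private static final Object COMPILE_LOCK = new Object();

    static String compile(String query, long simulatedRpcMillis) {
        synchronized (COMPILE_LOCK) { // every other thread blocks here
            try {
                // Stands in for a slow remote call made while compiling,
                // e.g. the getEZForPath RPC seen in the holder's stack.
                Thread.sleep(simulatedRpcMillis);
            } catch (InterruptedException ignored) {
                Thread.currentThread().interrupt();
            }
            return "compiled: " + query;
        }
    }

    public static void main(String[] args) throws Exception {
        Thread slow = new Thread(() -> compile("slow query", 200));
        slow.start();
        Thread.sleep(50); // let the slow compile grab the lock first
        long start = System.nanoTime();
        compile("fast query", 0); // blocks until the slow compile releases the lock
        System.out.println("fast compile waited ~"
                + (System.nanoTime() - start) / 1_000_000 + " ms");
        slow.join();
    }
}
```

A jstack of such a process shows exactly the reported picture: one thread inside the lock waiting on I/O, and every other handler thread "waiting to lock" the same monitor address.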