[jira] [Commented] (HIVE-27858) OOM happens when selecting many columns and JOIN.
[ https://issues.apache.org/jira/browse/HIVE-27858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784696#comment-17784696 ] Ryu Kobayashi commented on HIVE-27858: -- [~glapark] I can't say whether it completes when run without records on Hive 4 and master, because the OOM currently occurs there, but on Hive 2 it completes in about 30 seconds.
{code:java}
Query Execution Summary
----------------------------------------------------------------------
OPERATION                                                    DURATION
----------------------------------------------------------------------
Compile Query                                                  15.46s
Prepare Plan                                                   12.06s
Submit Plan                                                     0.07s
Start DAG                                                       0.21s
Run DAG                                                         0.46s
----------------------------------------------------------------------
{code}
> OOM happens when selecting many columns and JOIN. > -- > > Key: HIVE-27858 > URL: https://issues.apache.org/jira/browse/HIVE-27858 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 4.0.0-beta-1 >Reporter: Ryu Kobayashi >Priority: Major > Attachments: ddl.sql, query.sql > > > OOM happens when executing [^query.sql] using a table in [^ddl.sql]. These > did not happen in Hive 2 previously. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27868) Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog in HMS 3+
[ https://issues.apache.org/jira/browse/HIVE-27868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27868: -- Labels: pull-request-available (was: ) > Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog > in HMS 3+ > --- > > Key: HIVE-27868 > URL: https://issues.apache.org/jira/browse/HIVE-27868 > Project: Hive > Issue Type: Improvement >Reporter: Chao Sun >Priority: Major > Labels: pull-request-available > > HIVE-18755 introduced the concept of catalog which adds another level of > namespace on top of tables and databases. Given HMS using Hive 3.x already > has this feature and Hive 2.3 client is commonly used to talk to these > metastores through frameworks such as Spark, this JIRA proposes to backport a > subset of the features to allow Hive 2.3 client to specify catalog to read > from the 3.x metastores. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27868) Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog in HMS 3+
[ https://issues.apache.org/jira/browse/HIVE-27868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-27868: Description: HIVE-18755 introduced the concept of catalog which adds another level of namespace on top of tables and databases. Given HMS using Hive 3.x already has this feature and Hive 2.3 client is commonly used to talk to these metastores through frameworks such as Spark, this JIRA proposes to backport a subset of the features to allow Hive 2.3 client to specify catalog to read from the 3.x metastores. > Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog > in HMS 3+ > --- > > Key: HIVE-27868 > URL: https://issues.apache.org/jira/browse/HIVE-27868 > Project: Hive > Issue Type: Improvement >Reporter: Chao Sun >Priority: Major > > HIVE-18755 introduced the concept of catalog which adds another level of > namespace on top of tables and databases. Given HMS using Hive 3.x already > has this feature and Hive 2.3 client is commonly used to talk to these > metastores through frameworks such as Spark, this JIRA proposes to backport a > subset of the features to allow Hive 2.3 client to specify catalog to read > from the 3.x metastores. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (HIVE-27864) Update plugin for SBOM generation to 2.7.10
[ https://issues.apache.org/jira/browse/HIVE-27864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena resolved HIVE-27864. - Fix Version/s: 4.0.0 Resolution: Fixed > Update plugin for SBOM generation to 2.7.10 > --- > > Key: HIVE-27864 > URL: https://issues.apache.org/jira/browse/HIVE-27864 > Project: Hive > Issue Type: Improvement >Reporter: Vinod Anandan >Assignee: Vinod Anandan >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > > Update the CycloneDX Maven plugin for SBOM generation to 2.7.10 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-27864) Update plugin for SBOM generation to 2.7.10
[ https://issues.apache.org/jira/browse/HIVE-27864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784676#comment-17784676 ] Ayush Saxena commented on HIVE-27864: - Committed to master. Thanx [~vinodanandan] for the contribution!!! Welcome to Hive :) > Update plugin for SBOM generation to 2.7.10 > --- > > Key: HIVE-27864 > URL: https://issues.apache.org/jira/browse/HIVE-27864 > Project: Hive > Issue Type: Improvement >Reporter: Vinod Anandan >Assignee: Vinod Anandan >Priority: Major > Labels: pull-request-available > > Update the CycloneDX Maven plugin for SBOM generation to 2.7.10 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27864) Update plugin for SBOM generation to 2.7.10
[ https://issues.apache.org/jira/browse/HIVE-27864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HIVE-27864: --- Assignee: Vinod Anandan > Update plugin for SBOM generation to 2.7.10 > --- > > Key: HIVE-27864 > URL: https://issues.apache.org/jira/browse/HIVE-27864 > Project: Hive > Issue Type: Improvement >Reporter: Vinod Anandan >Assignee: Vinod Anandan >Priority: Major > Labels: pull-request-available > > Update the CycloneDX Maven plugin for SBOM generation to 2.7.10 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27868) Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog in HMS 3+
Chao Sun created HIVE-27868: --- Summary: Backport a subset of HIVE-18755 to branch-2.3 to support reading from catalog in HMS 3+ Key: HIVE-27868 URL: https://issues.apache.org/jira/browse/HIVE-27868 Project: Hive Issue Type: Improvement Reporter: Chao Sun -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (HIVE-22271) Create index on the TBL_COL_PRIVS table for the columns COLUMN_NAME, PRINCIPAL_NAME, PRINCIPAL_TYPE and TBL_ID
[ https://issues.apache.org/jira/browse/HIVE-22271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17784607#comment-17784607 ] Jose Martinez Poblete commented on HIVE-22271: -- Just for clarity: the parameter is actually *{{hive.privilege.synchronizer=false}}*, and it needs to be set in the HMS/HS2/Spark/whatever *{{hive-site.xml}}*. > Create index on the TBL_COL_PRIVS table for the columns COLUMN_NAME, > PRINCIPAL_NAME, PRINCIPAL_TYPE and TBL_ID > -- > > Key: HIVE-22271 > URL: https://issues.apache.org/jira/browse/HIVE-22271 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Marta Kuczora >Assignee: wenjun ma >Priority: Major > > In one of the escalations for HDP-3.1.0 we found that the table privilege > checks could be very slow, and that these checks could be sped up by defining an > INDEX on the TBL_COL_PRIVS table for the following columns: > COLUMN_NAME,PRINCIPAL_NAME,PRINCIPAL_TYPE,TBL_ID > In the MySQL slow query log, we found that the following query is executed > slowly: > {noformat} > SELECT DISTINCT > 'org.apache.hadoop.hive.metastore.model.MTableColumnPrivilege' AS > `NUCLEUS_TYPE`,`A0`.`AUTHORIZER`,`A0`.`COLUMN_NAME`,`A0`.`CREATE_TIME`,`A0`.`GRANT_OPTION`,`A0`.`GRANTOR`,`A0`.`GRANTOR_TYPE`,`A0`.`PRINCIPAL_NAME`,`A0`.`PRINCIPAL_TYPE`,`A0`.`TBL_COL_PRIV`,`A0`.`TBL_COLUMN_GRANT_ID` > FROM `TBL_COL_PRIVS` `A0` LEFT OUTER JOIN `TBLS` `B0` ON `A0`.`TBL_ID` = > `B0`.`TBL_ID` LEFT OUTER JOIN `DBS` `C0` ON `B0`.`DB_ID` = `C0`.`DB_ID` WHERE > `A0`.`PRINCIPAL_NAME` = 'xxx' AND `A0`.`PRINCIPAL_TYPE` = 'GROUP' AND > `B0`.`TBL_NAME` = '' AND `C0`.`NAME` = 'xxx' AND `C0`.`CTLG_NAME` = 'xxx' > AND `A0`.`COLUMN_NAME` = 'xxx' > {noformat} > When checking the explain plan of this query, it could be seen that the > index defined on the TBL_COL_PRIVS table is not used. In the slow query, the > COLUMN_NAME, PRINCIPAL_NAME, PRINCIPAL_TYPE and TBL_ID columns were used, and > after creating an index on these columns only, we saw a significant performance > improvement. -- This message was sent by Atlassian Jira (v8.20.10#820010)
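For illustration, a composite index over the four columns named above could be created on a MySQL-backed metastore with DDL along these lines (the index name is assumed for the sketch, not taken from the ticket):

```sql
-- Hypothetical composite index covering the columns used by the slow
-- privilege-check query; the index name is illustrative only.
CREATE INDEX TBL_COL_PRIVS_PRIV_CHECK_IDX
    ON TBL_COL_PRIVS (COLUMN_NAME, PRINCIPAL_NAME, PRINCIPAL_TYPE, TBL_ID);
```

Column order matters for such an index: the equality predicates in the reported query (`COLUMN_NAME`, `PRINCIPAL_NAME`, `PRINCIPAL_TYPE`) can all be satisfied from the index prefix before the join key `TBL_ID` is consulted.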
[jira] [Updated] (HIVE-27867) Incremental materialized view throws NPE when Iceberg source table is empty
[ https://issues.apache.org/jira/browse/HIVE-27867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27867: -- Labels: iceberg materializedviews pull-request-available (was: iceberg materializedviews) > Incremental materialized view throws NPE whew Iceberg source table is empty > --- > > Key: HIVE-27867 > URL: https://issues.apache.org/jira/browse/HIVE-27867 > Project: Hive > Issue Type: Bug >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Labels: iceberg, materializedviews, pull-request-available > Fix For: 4.0.0 > > > Repro > https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/test/queries/positive/mv_iceberg_orc.q > in hive.log > {code} > 2023-11-09T05:17:05,625 WARN [e35c7637-b0ba-4e30-8448-5bdc0d0e4779 main] > rebuild.AlterMaterializedViewRebuildAnalyzer: Exception loading materialized > views > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2227) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnaly > zer.java:215) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1700) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1569) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) > 
~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13113) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135) > ~[hive-exec-4.0.0-beta-2-SNAPSH > OT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at 
org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at >
[jira] [Work started] (HIVE-27867) Incremental materialized view throws NPE when Iceberg source table is empty
[ https://issues.apache.org/jira/browse/HIVE-27867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-27867 started by Krisztian Kasa. - > Incremental materialized view throws NPE whew Iceberg source table is empty > --- > > Key: HIVE-27867 > URL: https://issues.apache.org/jira/browse/HIVE-27867 > Project: Hive > Issue Type: Bug >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Labels: iceberg, materializedviews, pull-request-available > Fix For: 4.0.0 > > > Repro > https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/test/queries/positive/mv_iceberg_orc.q > in hive.log > {code} > 2023-11-09T05:17:05,625 WARN [e35c7637-b0ba-4e30-8448-5bdc0d0e4779 main] > rebuild.AlterMaterializedViewRebuildAnalyzer: Exception loading materialized > views > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2227) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnaly > zer.java:215) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1700) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1569) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > 
org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13113) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135) > ~[hive-exec-4.0.0-beta-2-SNAPSH > OT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) > 
~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430) >
[jira] [Created] (HIVE-27867) Incremental materialized view throws NPE when Iceberg source table is empty
Krisztian Kasa created HIVE-27867: - Summary: Incremental materialized view throws NPE whew Iceberg source table is empty Key: HIVE-27867 URL: https://issues.apache.org/jira/browse/HIVE-27867 Project: Hive Issue Type: Bug Reporter: Krisztian Kasa Assignee: Krisztian Kasa Fix For: 4.0.0 Repro https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/test/queries/positive/mv_iceberg_orc.q in hive.log {code} 2023-11-09T05:17:05,625 WARN [e35c7637-b0ba-4e30-8448-5bdc0d0e4779 main] rebuild.AlterMaterializedViewRebuildAnalyzer: Exception loading materialized views org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.NullPointerException at org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2321) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2227) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnaly zer.java:215) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1700) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1569) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at 
org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1321) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13113) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135) ~[hive-exec-4.0.0-beta-2-SNAPSH OT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at 
org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:121) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:227) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:257) ~[hive-cli-4.0.0-beta-2-SNAPSHOT.jar:?]
[jira] [Updated] (HIVE-27867) Incremental materialized view throws NPE when Iceberg source table is empty
[ https://issues.apache.org/jira/browse/HIVE-27867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Krisztian Kasa updated HIVE-27867: -- Labels: iceberg materializedviews (was: ) > Incremental materialized view throws NPE whew Iceberg source table is empty > --- > > Key: HIVE-27867 > URL: https://issues.apache.org/jira/browse/HIVE-27867 > Project: Hive > Issue Type: Bug >Reporter: Krisztian Kasa >Assignee: Krisztian Kasa >Priority: Major > Labels: iceberg, materializedviews > Fix For: 4.0.0 > > > Repro > https://github.com/apache/hive/blob/master/iceberg/iceberg-handler/src/test/queries/positive/mv_iceberg_orc.q > in hive.log > {code} > 2023-11-09T05:17:05,625 WARN [e35c7637-b0ba-4e30-8448-5bdc0d0e4779 main] > rebuild.AlterMaterializedViewRebuildAnalyzer: Exception loading materialized > views > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.metadata.Hive.getValidMaterializedViews(Hive.java:2321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.metadata.Hive.getMaterializedViewForRebuild(Hive.java:2227) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer$MVRebuildCalcitePlannerAction.applyMaterializedViewRewriting(AlterMaterializedViewRebuildAnaly > zer.java:215) ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1700) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1569) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.lambda$withPlanner$0(Frameworks.java:131) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > 
org.apache.calcite.prepare.CalcitePrepareImpl.perform(CalcitePrepareImpl.java:914) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:126) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1321) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:570) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:13113) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:465) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.ddl.view.materialized.alter.rebuild.AlterMaterializedViewRebuildAnalyzer.analyzeInternal(AlterMaterializedViewRebuildAnalyzer.java:135) > ~[hive-exec-4.0.0-beta-2-SNAPSH > OT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:180) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Compiler.analyze(Compiler.java:224) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Compiler.compile(Compiler.java:107) > 
~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:519) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:471) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:436) > ~[hive-exec-4.0.0-beta-2-SNAPSHOT.jar:4.0.0-beta-2-SNAPSHOT] > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:430) >
[jira] [Updated] (HIVE-27491) HPL/SQL does not allow variables in update statements
[ https://issues.apache.org/jira/browse/HIVE-27491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27491: -- Labels: pull-request-available (was: ) > HPL/SQL does not allow variables in update statements > - > > Key: HIVE-27491 > URL: https://issues.apache.org/jira/browse/HIVE-27491 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Reporter: Dayakar M >Assignee: Dayakar M >Priority: Major > Labels: pull-request-available > > HPL/SQL does not allow variables in update statements > Works in Oracle: > {noformat} > DECLARE > val_to_update varchar(10); > BEGIN > val_to_update := 'one'; > FOR REC in (select a,b from test1 where a = val_to_update) LOOP > dbms_output.put_line (rec.a); > dbms_output.put_line (rec.b); > END LOOP; > update test1 set b = 'another' > where a = val_to_update; > end;{noformat} > Doesn't work in Hive: > {noformat} > DECLARE > val_to_update STRING; > BEGIN > val_to_update := 'one'; > FOR REC in (select a,b from test where a = val_to_update) LOOP > print (rec.a); > print (rec.b); > END LOOP; > update test set b = 'another test' > where a = val_to_update; > end; > / > ERROR : FAILED: SemanticException [Error 10004]: Line 2:14 Invalid table > alias or column reference 'val_to_update': (possible column names are: a, b) > org.apache.hadoop.hive.ql.parse.SemanticException: Line 2:14 Invalid table > alias or column reference 'val_to_update': (possible column names are: a, b) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:13636) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:13575) > ... 
> {noformat} > > Select (not update) does work in hive: > {noformat} > DECLARE > val_to_update STRING; > BEGIN > val_to_update := 'one'; > FOR REC in (select a,b from test where a = val_to_update) LOOP > print (rec.a); > print (rec.b); > END LOOP; > select * from test > where a = val_to_update; > end; > /{noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27866) JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to the request if no custom cookies have been specified
[ https://issues.apache.org/jira/browse/HIVE-27866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27866: -- Labels: pull-request-available (was: ) > JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to > the request if no custom cookies have been specified > - > > Key: HIVE-27866 > URL: https://issues.apache.org/jira/browse/HIVE-27866 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 4.0.0-beta-1 >Reporter: Gergely Farkas >Assignee: Gergely Farkas >Priority: Major > Labels: pull-request-available > > While debugging session cookies of a sticky session, I noticed that the JDBC > driver adds an empty "Cookie" header to the request if no custom cookie is > configured. This is both unnecessary and unfortunately intermittently > interferes with the sticky session handling in the kubernetes nginx ingress > controller, so I created this ticket to omit the empty "Cookie" header. > some logs from the debug session: > {noformat} > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> POST > /cliservice HTTP/1.1 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Content-Type: application/x-thrift > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Accept: > application/x-thrift > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > User-Agent: Java/THttpClient/HC > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Content-Length: 85 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Host: > hs2-gfarkas1102d.apps.shared-rke-dev-01.kcloud.cloudera.com:443 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Connection: Keep-Alive > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: > 
NGINX_HS2_CLIENT_COOKIE=c3841bfcbfe977d6d38f33540a726fa6|581343ccdf5ff27614eb3667d4be1ded; > > hive.server2.auth=cu=dwxdevuser=-2881699162572965070=OyfcNcLzBz0h6AhDutto0M6jTNhpk+KfkJjp//q2lCg= > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Accept-Encoding: gzip,deflate > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > X-XSRF-HEADER: true > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > X-CSRF-TOKEN: true{noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
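The fix described can be sketched as follows. This is a hypothetical helper, not the actual HttpRequestInterceptorBase code: the idea is simply to build the "Cookie" header value only when there is at least one cookie, and skip adding the header entirely otherwise.

```java
import java.util.Map;
import java.util.stream.Collectors;

public class CookieHeaderSketch {
    /**
     * Builds the value for a "Cookie" request header, or returns null when
     * there are no custom cookies, so the caller can skip adding the header
     * entirely instead of emitting an empty "Cookie:" line.
     */
    static String cookieHeader(Map<String, String> cookies) {
        if (cookies == null || cookies.isEmpty()) {
            return null; // caller does not call request.addHeader("Cookie", ...) at all
        }
        return cookies.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("; "));
    }

    public static void main(String[] args) {
        // No cookies configured: no header should be added at all.
        System.out.println(cookieHeader(Map.of()));          // null
        System.out.println(cookieHeader(Map.of("a", "1")));  // a=1
    }
}
```

The null return is the important part: an interceptor that unconditionally sets the header produces the empty `Cookie:` line seen in the debug log above.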
[jira] [Updated] (HIVE-27692) Explore removing the always task from embedded HMS
[ https://issues.apache.org/jira/browse/HIVE-27692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng updated HIVE-27692: --- Description: The always tasks now run in the leader HMS; the properties for configuring the leader should belong only to HMS, and other engines such as Spark/Impala don't need to know them. In most cases, an engine only cares about the properties for connecting to HMS, e.g. hive.metastore.uris. Every time a new app uses an embedded Metastore, it starts the HMS always tasks by default. Imagine we have hundreds of apps: hundreds of copies of the same tasks would be running, putting extra burden on the underlying databases, such as flooding queries and hitting connection limits. I think we can remove the always tasks from the embedded Metastore; they will be taken care of by the standalone Metastore, as a standalone Metastore should be present in a production environment. was: The always tasks are running in the leader HMS now, the properties for configuring the leader should only belong to HMS, other engines such as Spark/Impala doesn't need to know these properties. For most cases, the engine only cares about the properties for connecting HMS, e.g, hive.metastore.uris. Every time when a new apps uses an embedded Metastore, it will start the HMS always tasks by default. Imaging we have hundreds of apps, then hundreds of pieces of tasks are running, this will put extra burden to the underlying databases, such as the flooding queries, connection limit. I think we can remove always tasks from the embeded Metastore, the always task will be taken care of by the standalone Metastore and a standalone Metastore should be here in production environment.
> Explore removing the always task from embedded HMS > -- > > Key: HIVE-27692 > URL: https://issues.apache.org/jira/browse/HIVE-27692 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > > The always tasks now run in the leader HMS, and the properties for > configuring the leader should belong to HMS alone; other engines such as > Spark/Impala don't need to know these properties. In most cases an engine > only cares about the properties for connecting to HMS, e.g. > hive.metastore.uris. > Every time a new app uses an embedded Metastore, it starts the HMS always > tasks by default. Imagine we have hundreds of apps: hundreds of copies of > the same tasks are then running, putting extra burden on the underlying > databases (flooding queries, connection limits). > I think we can remove the always tasks from the embedded Metastore and let > the standalone Metastore take care of them, since a standalone Metastore > should be present in a production environment. -- This message was sent by Atlassian Jira (v8.20.10#820010)
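The proposal above can be sketched in Java. This is a hypothetical illustration only, not Hive code: `AlwaysTaskScheduler` and its `embedded` flag are invented names, and the real HMS wires its housekeeping ("always") tasks differently. The point is the guard: an embedded instance schedules nothing and leaves housekeeping to the standalone HMS.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed behavior: recurring "always" tasks
// are scheduled only when the Metastore runs standalone; an embedded
// instance skips them so hundreds of client apps don't each hammer the
// backing database with the same housekeeping queries.
public class AlwaysTaskScheduler {
    private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);

    /** Returns the number of tasks actually scheduled. */
    public int start(List<Runnable> alwaysTasks, boolean embedded, long periodSeconds) {
        if (embedded) {
            return 0; // embedded HMS: leave housekeeping to the standalone HMS
        }
        for (Runnable task : alwaysTasks) {
            pool.scheduleAtFixedRate(task, 0, periodSeconds, TimeUnit.SECONDS);
        }
        return alwaysTasks.size();
    }

    public void stop() {
        pool.shutdownNow();
    }
}
```

With this shape, an app embedding the Metastore passes `embedded = true` and runs zero background tasks, while the one standalone HMS runs the full set.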
[jira] [Work started] (HIVE-27866) JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to the request if no custom cookies have been specified
[ https://issues.apache.org/jira/browse/HIVE-27866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-27866 started by Gergely Farkas. - > JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to > the request if no custom cookies have been specified > - > > Key: HIVE-27866 > URL: https://issues.apache.org/jira/browse/HIVE-27866 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 4.0.0-beta-1 >Reporter: Gergely Farkas >Assignee: Gergely Farkas >Priority: Major > > While debugging sticky-session cookies, I noticed that the JDBC > driver adds an empty "Cookie" header to the request if no custom cookie is > configured. This is unnecessary and, unfortunately, intermittently > interferes with the sticky-session handling in the Kubernetes NGINX ingress > controller, so I created this ticket to omit the empty "Cookie" header. > Some logs from the debug session: > {noformat} > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> POST > /cliservice HTTP/1.1 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Content-Type: application/x-thrift > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Accept: > application/x-thrift > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > User-Agent: Java/THttpClient/HC > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Content-Length: 85 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Host: > hs2-gfarkas1102d.apps.shared-rke-dev-01.kcloud.cloudera.com:443 > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Connection: Keep-Alive > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: > NGINX_HS2_CLIENT_COOKIE=c3841bfcbfe977d6d38f33540a726fa6|581343ccdf5ff27614eb3667d4be1ded; > > 
hive.server2.auth=cu=dwxdevuser=-2881699162572965070=OyfcNcLzBz0h6AhDutto0M6jTNhpk+KfkJjp//q2lCg= > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > Accept-Encoding: gzip,deflate > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > X-XSRF-HEADER: true > 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> > X-CSRF-TOKEN: true{noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27866) JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to the request if no custom cookies have been specified
Gergely Farkas created HIVE-27866: - Summary: JDBC: HttpRequestInterceptorBase should not add an empty "Cookie:" header to the request if no custom cookies have been specified Key: HIVE-27866 URL: https://issues.apache.org/jira/browse/HIVE-27866 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 4.0.0-beta-1 Reporter: Gergely Farkas Assignee: Gergely Farkas While debugging sticky-session cookies, I noticed that the JDBC driver adds an empty "Cookie" header to the request if no custom cookie is configured. This is unnecessary and, unfortunately, intermittently interferes with the sticky-session handling in the Kubernetes NGINX ingress controller, so I created this ticket to omit the empty "Cookie" header. Some logs from the debug session: {noformat} 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> POST /cliservice HTTP/1.1 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Content-Type: application/x-thrift 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Accept: application/x-thrift 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> User-Agent: Java/THttpClient/HC 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Content-Length: 85 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Host: hs2-gfarkas1102d.apps.shared-rke-dev-01.kcloud.cloudera.com:443 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Connection: Keep-Alive 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Cookie: NGINX_HS2_CLIENT_COOKIE=c3841bfcbfe977d6d38f33540a726fa6|581343ccdf5ff27614eb3667d4be1ded; hive.server2.auth=cu=dwxdevuser=-2881699162572965070=OyfcNcLzBz0h6AhDutto0M6jTNhpk+KfkJjp//q2lCg= 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> Accept-Encoding: gzip,deflate 2023-11-08T17:18:05,616 DEBUG [main] 
http.headers: http-outgoing-0 >> X-XSRF-HEADER: true 2023-11-08T17:18:05,616 DEBUG [main] http.headers: http-outgoing-0 >> X-CSRF-TOKEN: true{noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
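The fix the ticket asks for amounts to a guard: build and send a Cookie header value only when at least one custom cookie is actually configured, and omit the header entirely otherwise. A minimal Java sketch of that logic (`CookieHeaderBuilder` is a hypothetical name; the real change would live inside HttpRequestInterceptorBase, which is not reproduced here):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Collectors;

// Hypothetical sketch: return a Cookie header value only when custom
// cookies exist; an empty Optional means "do not add the header at all",
// which avoids the bare "Cookie:" line seen in the debug log above.
public class CookieHeaderBuilder {
    public static Optional<String> buildCookieHeader(Map<String, String> customCookies) {
        if (customCookies == null || customCookies.isEmpty()) {
            return Optional.empty(); // omit the header entirely
        }
        String value = customCookies.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining("; "));
        return Optional.of(value);
    }
}
```

The caller then only invokes `request.addHeader("Cookie", value)` when the Optional is present, so a request with no configured cookies carries no Cookie header for the ingress controller to misinterpret.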
[jira] [Updated] (HIVE-27865) HMS in http mode drops down silently with no errors
[ https://issues.apache.org/jira/browse/HIVE-27865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27865: -- Labels: pull-request-available (was: ) > HMS in http mode drops down silently with no errors > --- > > Key: HIVE-27865 > URL: https://issues.apache.org/jira/browse/HIVE-27865 > Project: Hive > Issue Type: Bug >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > Labels: pull-request-available > > After HIVE-27340, the threads in the Jetty pool are all daemon threads; such > a thread pool does not prevent the JVM from exiting when the HMS main thread > finishes: > {noformat} > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="metastore.HiveMetaStore" level="INFO" thread="main"] Started > HTTPServer for HMS > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of > Atlas Hook{noformat} > We should provide a way to avoid the silent shutdown, as we do in > HiveServer2 HTTP mode: > [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (HIVE-27865) HMS in http mode drops down silently with no errors
[ https://issues.apache.org/jira/browse/HIVE-27865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng reassigned HIVE-27865: -- Assignee: Zhihua Deng > HMS in http mode drops down silently with no errors > --- > > Key: HIVE-27865 > URL: https://issues.apache.org/jira/browse/HIVE-27865 > Project: Hive > Issue Type: Bug >Reporter: Zhihua Deng >Assignee: Zhihua Deng >Priority: Major > > After HIVE-27340, the threads in the Jetty pool are all daemon threads; such > a thread pool does not prevent the JVM from exiting when the HMS main thread > finishes: > {noformat} > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="metastore.HiveMetaStore" level="INFO" thread="main"] Started > HTTPServer for HMS > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of > Atlas Hook{noformat} > We should provide a way to avoid the silent shutdown, as we do in > HiveServer2 HTTP mode: > [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (HIVE-27865) HMS in http mode drops down silently with no errors
[ https://issues.apache.org/jira/browse/HIVE-27865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhihua Deng updated HIVE-27865: --- Description: After HIVE-27340, the threads in the Jetty pool are all daemon threads; such a thread pool does not prevent the JVM from exiting when the HMS main thread finishes: {noformat} metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="metastore.HiveMetaStore" level="INFO" thread="main"] Started HTTPServer for HMS metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of Atlas Hook{noformat} We should provide a way to avoid the silent shutdown, as we do in HiveServer2 HTTP mode: [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] was: After the HIVE-27340, threads in the jetty pool are all daemon threads, such thread pool does not prevent the JVM from exiting when HMS finishes the main thread: {noformat} metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="metastore.HiveMetaStore" level="INFO" thread="main"] Started HTTPServer for HMS metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of Atlas Hook{noformat} We should provide a way to avoid the silent shutdown as what we do in HiveServer2 http mode: [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] > HMS in http mode drops down silently with no errors > --- > > Key: HIVE-27865 > URL: https://issues.apache.org/jira/browse/HIVE-27865 > Project: Hive > Issue Type: Bug >Reporter: Zhihua Deng >Priority: Major > > After HIVE-27340, the threads in the Jetty pool are all daemon threads; such
> a thread pool does not prevent the JVM from exiting when the HMS main thread > finishes: > {noformat} > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="metastore.HiveMetaStore" level="INFO" thread="main"] Started > HTTPServer for HMS > metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 > class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of > Atlas Hook{noformat} > We should provide a way to avoid the silent shutdown, as we do in > HiveServer2 HTTP mode: > [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27865) HMS in http mode drops down silently with no errors
Zhihua Deng created HIVE-27865: -- Summary: HMS in http mode drops down silently with no errors Key: HIVE-27865 URL: https://issues.apache.org/jira/browse/HIVE-27865 Project: Hive Issue Type: Bug Reporter: Zhihua Deng After HIVE-27340, the threads in the Jetty pool are all daemon threads; such a thread pool does not prevent the JVM from exiting when the HMS main thread finishes: {noformat} metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="metastore.HiveMetaStore" level="INFO" thread="main"] Started HTTPServer for HMS metastore-0 metastore 1 96e7e920-067b-4134-9abe-c731d12bc8eb [mdc@18060 class="hook.AtlasHook" level="INFO" thread="shutdown-hook-0"] ==> Shutdown of Atlas Hook{noformat} We should provide a way to avoid the silent shutdown, as we do in HiveServer2 HTTP mode: [https://github.com/apache/hive/blob/master/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java#L282] -- This message was sent by Atlassian Jira (v8.20.10#820010)
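The failure mode described in the ticket is easy to reproduce with plain JDK primitives: a pool built from daemon threads never keeps the JVM alive on its own, so the process exits once the main thread returns; the remedy is for main to block on the server (as the linked ThriftHttpCLIService code does) instead of falling through. A small sketch (`DaemonPoolDemo` is an invented name, not Hive code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class DaemonPoolDemo {
    // Thread factory analogous to the Jetty pool after HIVE-27340:
    // every worker is a daemon thread.
    static final ThreadFactory DAEMON = r -> {
        Thread t = new Thread(r);
        t.setDaemon(true);
        return t;
    };

    // Shows that tasks on such a pool run on daemon threads. Daemon threads
    // do not prevent JVM exit, so if main() only starts the server and
    // returns, the process dies silently; main must block (e.g. join on the
    // server thread) to keep the HTTP service alive.
    public static boolean isPoolDaemon() {
        ExecutorService pool = Executors.newSingleThreadExecutor(DAEMON);
        try {
            return pool.submit(() -> Thread.currentThread().isDaemon()).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

In other words, the daemon flag itself is fine; the missing piece is a blocking call at the end of the HMS main thread, which is exactly what the ticket proposes to borrow from HiveServer2's HTTP mode.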
[jira] [Updated] (HIVE-27864) Update plugin for SBOM generation to 2.7.10
[ https://issues.apache.org/jira/browse/HIVE-27864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-27864: -- Labels: pull-request-available (was: ) > Update plugin for SBOM generation to 2.7.10 > --- > > Key: HIVE-27864 > URL: https://issues.apache.org/jira/browse/HIVE-27864 > Project: Hive > Issue Type: Improvement >Reporter: Vinod Anandan >Priority: Major > Labels: pull-request-available > > Update the CycloneDX Maven plugin for SBOM generation to 2.7.10 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (HIVE-27864) Update plugin for SBOM generation to 2.7.10
Vinod Anandan created HIVE-27864: Summary: Update plugin for SBOM generation to 2.7.10 Key: HIVE-27864 URL: https://issues.apache.org/jira/browse/HIVE-27864 Project: Hive Issue Type: Improvement Reporter: Vinod Anandan Update the CycloneDX Maven plugin for SBOM generation to 2.7.10 -- This message was sent by Atlassian Jira (v8.20.10#820010)
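An upgrade like this is typically a one-line version bump in the root pom.xml. A hedged sketch of the relevant fragment (the `org.cyclonedx:cyclonedx-maven-plugin` coordinates and the `makeAggregateBom` goal are the commonly used CycloneDX ones; the exact execution configuration in Hive's pom may differ):

```xml
<!-- Sketch: pin the CycloneDX SBOM plugin to 2.7.10 -->
<plugin>
  <groupId>org.cyclonedx</groupId>
  <artifactId>cyclonedx-maven-plugin</artifactId>
  <version>2.7.10</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>makeAggregateBom</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```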