[jira] [Assigned] (HIVE-24724) Create table with LIKE operator does not work correctly

2023-12-19 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-24724:
-

Assignee: KIRTI RUGE

> Create table with LIKE operator does not work correctly
> ---
>
> Key: HIVE-24724
> URL: https://issues.apache.org/jira/browse/HIVE-24724
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 4.0.0
>Reporter: Rajkumar Singh
>Assignee: KIRTI RUGE
>Priority: Major
>
> Steps to repro:
> {code:java}
> create table atable (id int, str1 string);
> alter table atable add constraint pk_atable primary key (id) disable 
> novalidate;
> create table btable like atable;
> {code}
> describe formatted btable lacks the constraint information.
> CreateTableLikeDesc does not set/fetch the constraints for the LIKE table
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L13594-L13616
> nor does DDLTask fetch/set the constraints for the table.
> https://github.com/apache/hive/blob/5ba3dfcb6470ff42c58a3f95f0d5e72050274a42/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/like/CreateTableLikeOperation.java#L58-L83
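> A minimal verification sketch of what one would expect after a fix, reusing the repro tables above (the "Constraints" excerpt is an assumption about the desired output, not current behaviour):
> {code:sql}
> -- After `create table btable like atable`, the new table should carry an
> -- equivalent primary key constraint:
> describe formatted btable;
> -- Expected (hypothetical) excerpt:
> -- # Constraints
> -- # Primary Key
> -- Table:              default.btable
> -- Column Name:        id
> {code}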



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23709) jdbc_handler is flaky

2023-12-11 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23709:
-

Assignee: KIRTI RUGE

> jdbc_handler is flaky
> -
>
> Key: HIVE-23709
> URL: https://issues.apache.org/jira/browse/HIVE-23709
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>
> http://34.66.156.144:8080/job/hive-precommit/job/master/51/testReport/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23690) TestNegativeCliDriver#[external_jdbc_negative] is flaky

2023-12-11 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23690:
-

Assignee: KIRTI RUGE

> TestNegativeCliDriver#[external_jdbc_negative] is flaky
> ---
>
> Key: HIVE-23690
> URL: https://issues.apache.org/jira/browse/HIVE-23690
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>
> failed after 10 tries:
> http://130.211.9.232/job/hive-flaky-check/34/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27911) Drop database query failing with Invalid ACL Exception

2023-11-26 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27911:
-

 Summary: Drop database query failing with Invalid ACL Exception
 Key: HIVE-27911
 URL: https://issues.apache.org/jira/browse/HIVE-27911
 Project: Hive
  Issue Type: Improvement
Reporter: KIRTI RUGE


You may see the following error in a Hue or Beeline session when executing drop 
database, drop table, or alter table drop partition operations on a Hive Virtual 
Warehouse that is in the Stopped state: 
"org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = 
InvalidACL for /llap-sasl/user-hive".
The exception appears because the Hive VW tries to evict the cache in the LLAP 
executors, but those computes in a Stopped warehouse are not running.
Note: The database or table is deleted despite the exception; only the LLAP 
executors do not flush their database- or table-related buffers, because these 
executors are not running.
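A minimal sketch of the kind of statement that surfaces the error (the database name is illustrative; the failure mode is the one described above):

{code:sql}
-- Issued against a Hive Virtual Warehouse whose LLAP computes are in the Stopped state
DROP DATABASE IF EXISTS sales_db CASCADE;
-- The metastore drop itself succeeds, but the cache-eviction call towards the
-- stopped LLAP executors fails with:
--   org.apache.zookeeper.KeeperException$InvalidACLException:
--   KeeperErrorCode = InvalidACL for /llap-sasl/user-hive
{code}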



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27911) Drop database query failing with Invalid ACL Exception

2023-11-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27911:
-

Assignee: KIRTI RUGE

> Drop database query failing with Invalid ACL Exception
> --
>
> Key: HIVE-27911
> URL: https://issues.apache.org/jira/browse/HIVE-27911
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> You may see the following error in a Hue or Beeline session when executing 
> drop database, drop table, or alter table drop partition operations on a Hive 
> Virtual Warehouse that is in the Stopped state: 
> "org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = 
> InvalidACL for /llap-sasl/user-hive".
> The exception appears because the Hive VW tries to evict the cache in the 
> LLAP executors, but those computes in a Stopped warehouse are not running.
> Note: The database or table is deleted despite the exception; only the LLAP 
> executors do not flush their database- or table-related buffers, because these 
> executors are not running.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27767) Copy more data into HIVE_LOCKS for better supportability

2023-10-04 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27767:
-

Assignee: KIRTI RUGE

> Copy more data into HIVE_LOCKS for better supportability
> 
>
> Key: HIVE-27767
> URL: https://issues.apache.org/jira/browse/HIVE-27767
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> There is some information, such as ERROR_MESSAGE, that should be copied to HIVE_LOCKS. It 
> would help with supportability if HIVE_LOCKS (and especially the view of it 
> in the SYS database) also contained this information.
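> A sketch of the supportability query this would enable; the error_message column is what this issue proposes (it does not exist yet), and the other column names are only illustrative of the HIVE_LOCKS layout:
> {code:sql}
> -- Hypothetical: inspect problematic locks from the SYS database once
> -- HIVE_LOCKS carries the extra detail
> SELECT hl_lock_ext_id, hl_db, hl_table, hl_lock_state, error_message
> FROM sys.hive_locks
> WHERE error_message IS NOT NULL;
> {code}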



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27768) Mask patterns in q test output to avoid flakiness

2023-10-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27768:
-

Assignee: KIRTI RUGE

> Mask patterns in q test output to avoid flakiness
> -
>
> Key: HIVE-27768
> URL: https://issues.apache.org/jira/browse/HIVE-27768
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> Please mask the file-size pattern in the q tests below.
> Pattern:
> Found 3 items
> drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
> PATH ###
> drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
> PATH ###
> drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
> PATH ###
>  
>  
> Tests
> cascade_dbdrop
> cttl
> orc_merge1
> orc_merge2
> orc_merge3
> orc_merge4
> orc_merge10
> autoColumnStats_6
> flatten_union_subdir
> acid_vectorization_original_tez
> temp_table_external
>  
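> As an illustration, the lines above typically come from an HDFS listing inside the q file; a minimal sketch (the path and variable are illustrative, not taken from the listed tests):
> {code:sql}
> -- A dfs -ls call in a .q test prints the "Found N items" block, whose file
> -- sizes and timestamps vary between runs unless they are masked
> dfs -ls ${hiveconf:hive.metastore.warehouse.dir}/orc_merge1;
> {code}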



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26280) Copy more data into COMPLETED_COMPACTIONS for better supportability

2023-10-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26280:
-

Assignee: KIRTI RUGE  (was: Karen Coppage)

> Copy more data into COMPLETED_COMPACTIONS for better supportability
> ---
>
> Key: HIVE-26280
> URL: https://issues.apache.org/jira/browse/HIVE-26280
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Karen Coppage
>Assignee: KIRTI RUGE
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> There is some information in COMPACTION_QUEUE that doesn't get copied over to 
> COMPLETED_COMPACTIONS when compaction completes. It would help with 
> supportability if COMPLETED_COMPACTIONS (and especially the view of it in the 
> SYS database) also contained this information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27768) Mask patterns in q test output to avoid flakiness

2023-10-03 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27768:
-

 Summary: Mask patterns in q test output to avoid flakiness
 Key: HIVE-27768
 URL: https://issues.apache.org/jira/browse/HIVE-27768
 Project: Hive
  Issue Type: Improvement
Reporter: KIRTI RUGE


Please mask the file-size pattern in the q tests below.

Pattern:

Found 3 items
drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
PATH ###
drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
PATH ###
drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
PATH ###

 

 

Tests

cascade_dbdrop

cttl

orc_merge1

orc_merge2

orc_merge3

orc_merge4

orc_merge10

autoColumnStats_6

flatten_union_subdir

acid_vectorization_original_tez

temp_table_external

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27767) Copy more data into HIVE_LOCKS for better supportability

2023-10-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27767:
--
Summary: Copy more data into HIVE_LOCKS for better supportability  (was: 
Copy more data into COMPLETED_COMPACTIONS for better supportability)

> Copy more data into HIVE_LOCKS for better supportability
> 
>
> Key: HIVE-27767
> URL: https://issues.apache.org/jira/browse/HIVE-27767
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: KIRTI RUGE
>Priority: Major
>
> There is some information, such as ERROR_MESSAGE, that should be copied to HIVE_LOCKS. It 
> would help with supportability if HIVE_LOCKS (and especially the view of it 
> in the SYS database) also contained this information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27767) Copy more data into COMPLETED_COMPACTIONS for better supportability

2023-10-03 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27767:
-

 Summary: Copy more data into COMPLETED_COMPACTIONS for better 
supportability
 Key: HIVE-27767
 URL: https://issues.apache.org/jira/browse/HIVE-27767
 Project: Hive
  Issue Type: Sub-task
  Components: Transactions
Reporter: KIRTI RUGE


There is some information, such as ERROR_MESSAGE, that should be copied to HIVE_LOCKS. It 
would help with supportability if HIVE_LOCKS (and especially the view of it in 
the SYS database) also contained this information.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27753) Mask Q file output to avoid flakiness

2023-09-28 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27753:
--
Description: 
Mask the pattern below in q output files to avoid test flakiness

 

drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
PATH ###

  was:Mask below pattern in q output files to avoid flakyness of tests


> Mask Q file output to avoid flakiness 
> --
>
> Key: HIVE-27753
> URL: https://issues.apache.org/jira/browse/HIVE-27753
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> Mask the pattern below in q output files to avoid test flakiness
>  
> drwxr-xr-x - ### USER ### ### GROUP ### 0 ### HDFS DATE ### hdfs://### HDFS 
> PATH ###



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27753) Mask Q file output to avoid flakiness

2023-09-28 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27753:
-

Assignee: KIRTI RUGE

> Mask Q file output to avoid flakiness 
> --
>
> Key: HIVE-27753
> URL: https://issues.apache.org/jira/browse/HIVE-27753
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> Mask the pattern below in q output files to avoid test flakiness



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27753) Mask Q file output to avoid flakiness

2023-09-28 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27753:
-

 Summary: Mask Q file output to avoid flakiness 
 Key: HIVE-27753
 URL: https://issues.apache.org/jira/browse/HIVE-27753
 Project: Hive
  Issue Type: Improvement
Reporter: KIRTI RUGE


Mask the pattern below in q output files to avoid test flakiness



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27745) Create a test which validates Schematool sanity

2023-09-27 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27745:
-

Assignee: KIRTI RUGE

> Create a test which validates Schematool sanity 
> 
>
> Key: HIVE-27745
> URL: https://issues.apache.org/jira/browse/HIVE-27745
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
>  
> With each Hive release we update the version in pom.xml. Let us have a basic 
> sanity-check test which validates the two cases below:
>  * {{fullVersion}} without "-SNAPSHOT" is equal to {{shortVersion}}
>  * {{shortVersion}} is a prefix of {{fullVersion}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27745) Create a test which validates Schematool sanity

2023-09-27 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27745:
-

 Summary: Create a test which validates Schematool sanity 
 Key: HIVE-27745
 URL: https://issues.apache.org/jira/browse/HIVE-27745
 Project: Hive
  Issue Type: Improvement
Reporter: KIRTI RUGE


 

With each Hive release we update the version in pom.xml. Let us have a basic 
sanity-check test which validates the two cases below:
 * {{fullVersion}} without "-SNAPSHOT" is equal to {{shortVersion}}
 * {{shortVersion}} is a prefix of {{fullVersion}}.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27738) Fix Schematool version so that it can pickup correct schema script file after 4.0.0-beta-1 release

2023-09-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27738:
-

Assignee: KIRTI RUGE

> Fix Schematool version so that it can pickup correct schema script file after 
> 4.0.0-beta-1 release
> --
>
> Key: HIVE-27738
> URL: https://issues.apache.org/jira/browse/HIVE-27738
> Project: Hive
>  Issue Type: Bug
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> hive.version.shortname needs to be fixed in /pom.xml and 
> standalone-metastore/pom.xml so that the xxx4.0.0-beta-2.xx.sql schema script 
> file is picked up correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27738) Fix Schematool version so that it can pickup correct schema script file after 4.0.0-beta-1 release

2023-09-26 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27738:
-

 Summary: Fix Schematool version so that it can pickup correct 
schema script file after 4.0.0-beta-1 release
 Key: HIVE-27738
 URL: https://issues.apache.org/jira/browse/HIVE-27738
 Project: Hive
  Issue Type: Bug
Reporter: KIRTI RUGE


hive.version.shortname needs to be fixed in /pom.xml and 
standalone-metastore/pom.xml so that the xxx4.0.0-beta-2.xx.sql schema script 
file is picked up correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-23680) TestDbNotificationListener is unstable

2023-08-28 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE resolved HIVE-23680.
---
Resolution: Fixed

Thanks László Végh for the review.

> TestDbNotificationListener is unstable
> --
>
> Key: HIVE-23680
> URL: https://issues.apache.org/jira/browse/HIVE-23680
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>
> http://34.66.156.144:8080/job/hive-precommit/job/master/35/testReport/
> http://130.211.9.232/job/hive-flaky-check/24/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23680) TestDbNotificationListener is unstable

2023-08-28 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23680:
-

Assignee: KIRTI RUGE  (was: Aasha Medhi)

> TestDbNotificationListener is unstable
> --
>
> Key: HIVE-23680
> URL: https://issues.apache.org/jira/browse/HIVE-23680
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>
> http://34.66.156.144:8080/job/hive-precommit/job/master/35/testReport/
> http://130.211.9.232/job/hive-flaky-check/24/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27529) Add dictionary encoding support for parquet decimal types

2023-07-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27529:
--
Summary: Add dictionary encoding support for parquet decimal types  (was: 
Support dictionary encoding support for parquet decimal types)

> Add dictionary encoding support for parquet decimal types
> -
>
> Key: HIVE-27529
> URL: https://issues.apache.org/jira/browse/HIVE-27529
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
> Attachments: image-2023-07-26-15-18-55-480.png
>
>
> Parquet does not support any dictionary of type decimal. At present, the 
> supported PlainValuesDictionary types are:
> PlainFloatDictionary
> PlainIntegerDictionary
> PlainDoubleDictionary
> PlainLongDictionary
> PlainBinaryDictionary
>  
> Whenever there is a conversion from any physical (primitive) / logical 
> (decimal) Parquet type to the Hive decimal type, dictionary support needs to 
> be considered.
> We need to investigate further how this support can be implemented on the 
> Hive side, or on the Parquet side if needed.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27529) Support dictionary encoding support for parquet decimal types

2023-07-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27529:
--
Summary: Support dictionary encoding support for parquet decimal types  
(was: Support dictionary encoding in parquet for decimal types)

> Support dictionary encoding support for parquet decimal types
> -
>
> Key: HIVE-27529
> URL: https://issues.apache.org/jira/browse/HIVE-27529
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
> Attachments: image-2023-07-26-15-18-55-480.png
>
>
> Parquet does not support any dictionary of type decimal. At present, the 
> supported PlainValuesDictionary types are:
> PlainFloatDictionary
> PlainIntegerDictionary
> PlainDoubleDictionary
> PlainLongDictionary
> PlainBinaryDictionary
>  
> Whenever there is a conversion from any physical (primitive) / logical 
> (decimal) Parquet type to the Hive decimal type, dictionary support needs to 
> be considered.
> We need to investigate further how this support can be implemented on the 
> Hive side, or on the Parquet side if needed.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HIVE-27529) Support dictionary encoding in parquet for decimal types

2023-07-26 Thread KIRTI RUGE (Jira)
KIRTI RUGE created HIVE-27529:
-

 Summary: Support dictionary encoding in parquet for decimal types
 Key: HIVE-27529
 URL: https://issues.apache.org/jira/browse/HIVE-27529
 Project: Hive
  Issue Type: Improvement
Reporter: KIRTI RUGE
 Attachments: image-2023-07-26-15-18-55-480.png

Parquet does not support any dictionary of type decimal. At present, the 
supported PlainValuesDictionary types are:

PlainFloatDictionary

PlainIntegerDictionary

PlainDoubleDictionary

PlainLongDictionary

PlainBinaryDictionary

 

Whenever there is a conversion from any physical (primitive) / logical (decimal) 
Parquet type to the Hive decimal type, dictionary support needs to be considered.

We need to investigate further how this support can be implemented on the Hive 
side, or on the Parquet side if needed.
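A minimal sketch of a table/query where decimal dictionary support would matter (the table is illustrative; parquet.enable.dictionary is a standard Parquet writer option, while decimal dictionary handling on the read path is what this issue proposes):

{code:sql}
SET parquet.enable.dictionary=true;
CREATE TABLE prices (item_id int, price decimal(7,2)) STORED AS PARQUET;
INSERT INTO prices VALUES (1, 9.99), (2, 9.99), (3, 19.99);
-- Repeated decimal values like 9.99 are natural dictionary candidates in the file,
-- but the Hive reader has no PlainValuesDictionary variant for decimal yet
SELECT price, count(*) FROM prices GROUP BY price;
{code}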

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27529) Support dictionary encoding in parquet for decimal types

2023-07-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27529:
-

Assignee: KIRTI RUGE

> Support dictionary encoding in parquet for decimal types
> 
>
> Key: HIVE-27529
> URL: https://issues.apache.org/jira/browse/HIVE-27529
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
> Attachments: image-2023-07-26-15-18-55-480.png
>
>
> Parquet does not support any dictionary of type decimal. At present, the 
> supported PlainValuesDictionary types are:
> PlainFloatDictionary
> PlainIntegerDictionary
> PlainDoubleDictionary
> PlainLongDictionary
> PlainBinaryDictionary
>  
> Whenever there is a conversion from any physical (primitive) / logical 
> (decimal) Parquet type to the Hive decimal type, dictionary support needs to 
> be considered.
> We need to investigate further how this support can be implemented on the 
> Hive side, or on the Parquet side if needed.
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23985) flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]

2023-06-05 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23985:
-

Assignee: Stamatis Zampetakis

> flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
> -
>
> Key: HIVE-23985
> URL: https://issues.apache.org/jira/browse/HIVE-23985
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Stamatis Zampetakis
>Priority: Major
>
> http://ci.hive.apache.org/job/hive-precommit/job/master/144/testReport/junit/org.apache.hadoop.hive.cli/TestMiniHiveKafkaCliDriver/Testing___split_16___Archive___testCliDriver_kafka_storage_handler_/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-23985) flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]

2023-06-05 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-23985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729270#comment-17729270
 ] 

KIRTI RUGE commented on HIVE-23985:
---

Thanks [~zabetak]. I have added this ticket to your queue.

> flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
> -
>
> Key: HIVE-23985
> URL: https://issues.apache.org/jira/browse/HIVE-23985
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Stamatis Zampetakis
>Priority: Major
>
> http://ci.hive.apache.org/job/hive-precommit/job/master/144/testReport/junit/org.apache.hadoop.hive.cli/TestMiniHiveKafkaCliDriver/Testing___split_16___Archive___testCliDriver_kafka_storage_handler_/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23985) flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]

2023-06-05 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23985:
-

Assignee: (was: KIRTI RUGE)

> flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
> -
>
> Key: HIVE-23985
> URL: https://issues.apache.org/jira/browse/HIVE-23985
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Priority: Major
>
> http://ci.hive.apache.org/job/hive-precommit/job/master/144/testReport/junit/org.apache.hadoop.hive.cli/TestMiniHiveKafkaCliDriver/Testing___split_16___Archive___testCliDriver_kafka_storage_handler_/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23985) flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]

2023-04-09 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23985:
-

Assignee: KIRTI RUGE

> flaky TestMiniHiveKafkaCliDriver.testCliDriver[kafka_storage_handler]
> -
>
> Key: HIVE-23985
> URL: https://issues.apache.org/jira/browse/HIVE-23985
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>
> http://ci.hive.apache.org/job/hive-precommit/job/master/144/testReport/junit/org.apache.hadoop.hive.cli/TestMiniHiveKafkaCliDriver/Testing___split_16___Archive___testCliDriver_kafka_storage_handler_/



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-23548) TestActivePassiveHA is unstable

2023-04-09 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-23548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-23548:
-

Assignee: KIRTI RUGE

> TestActivePassiveHA is unstable
> ---
>
> Key: HIVE-23548
> URL: https://issues.apache.org/jira/browse/HIVE-23548
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27213) Parquet logical decimal type to INT32 is not working while computing statistics

2023-04-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27213:
--
Description: 
test.parquet

Steps to reproduce:

dfs ${system:test.dfs.mkdir} hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -copyFromLocal ../../data/files/dwxtest.parquet 
hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
dfs -ls hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825/;

CREATE EXTERNAL TABLE `web_sales`(
`ws_sold_time_sk` int,
`ws_ship_date_sk` int,
`ws_item_sk` int,
`ws_bill_customer_sk` int,
`ws_bill_cdemo_sk` int,
`ws_bill_hdemo_sk` int,
`ws_bill_addr_sk` int,
`ws_ship_customer_sk` int,
`ws_ship_cdemo_sk` int,
`ws_ship_hdemo_sk` int,
`ws_ship_addr_sk` int,
`ws_web_page_sk` int,
`ws_web_site_sk` int,
`ws_ship_mode_sk` int,
`ws_warehouse_sk` int,
`ws_promo_sk` int,
`ws_order_number` bigint,
`ws_quantity` int,
`ws_wholesale_cost` decimal(7,2),
`ws_list_price` decimal(7,2),
`ws_sales_price` decimal(7,2),
`ws_ext_discount_amt` decimal(7,2),
`ws_ext_sales_price` decimal(7,2),
`ws_ext_wholesale_cost` decimal(7,2),
`ws_ext_list_price` decimal(7,2),
`ws_ext_tax` decimal(7,2),
`ws_coupon_amt` decimal(7,2),
`ws_ext_ship_cost` decimal(7,2),
`ws_net_paid` decimal(7,2),
`ws_net_paid_inc_tax` decimal(7,2),
`ws_net_paid_inc_ship` decimal(7,2),
`ws_net_paid_inc_ship_tax` decimal(7,2),
`ws_net_profit` decimal(7,2))
PARTITIONED BY (
`ws_sold_date_sk` int)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS PARQUET LOCATION 'hdfs:///tmp/dwxtest/';

MSCK REPAIR TABLE web_sales;

analyze table web_sales compute statistics for columns;

 


Error Stack:

 


analyze table web_sales compute statistics for columns;

], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : 
attempt_1678779198717__2_00_52_3:java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: 
org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
block -1 in file 
s3a://xx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
    at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
    at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
    at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
    at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
    at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
    at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.RuntimeException: java.io.IOException: 
org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
block -1 in file 
s3a://xxx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
    at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
    at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145)
    at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
    at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
    at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
    at 
org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:704)
    at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:663)
    at 
org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
    at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
    at 

[jira] [Updated] (HIVE-27213) Parquet logical decimal type to INT32 is not working while computing statistics

2023-04-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27213:
--
Description: test.parquetSteps to reproduce:dfs ${system:test.dfs.mkdir} 
hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825; dfs -copyFromLocal 
../../data/files/dwxtest.parquet hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825; 
dfs -ls hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825/;CREATE EXTERNAL TABLE 
`web_sales`( `ws_sold_time_sk` int, `ws_ship_date_sk` int, `ws_item_sk` int, 
`ws_bill_customer_sk` int, `ws_bill_cdemo_sk` int, `ws_bill_hdemo_sk` int, 
`ws_bill_addr_sk` int, `ws_ship_customer_sk` int, `ws_ship_cdemo_sk` int, 
`ws_ship_hdemo_sk` int, `ws_ship_addr_sk` int, `ws_web_page_sk` int, 
`ws_web_site_sk` int, `ws_ship_mode_sk` int, `ws_warehouse_sk` int, 
`ws_promo_sk` int, `ws_order_number` bigint, `ws_quantity` int, 
`ws_wholesale_cost` decimal(7,2), `ws_list_price` decimal(7,2), 
`ws_sales_price` decimal(7,2), `ws_ext_discount_amt` decimal(7,2), 
`ws_ext_sales_price` decimal(7,2), `ws_ext_wholesale_cost` decimal(7,2), 
`ws_ext_list_price` decimal(7,2), `ws_ext_tax` decimal(7,2), `ws_coupon_amt` 
decimal(7,2), `ws_ext_ship_cost` decimal(7,2), `ws_net_paid` decimal(7,2), 
`ws_net_paid_inc_tax` decimal(7,2), `ws_net_paid_inc_ship` decimal(7,2), 
`ws_net_paid_inc_ship_tax` decimal(7,2), `ws_net_profit` decimal(7,2)) 
PARTITIONED BY ( `ws_sold_date_sk` int) ROW FORMAT SERDE 
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS PARQUET 
LOCATION 'hdfs:///tmp/dwxtest/';MSCK REPAIR TABLE web_sales;analyze table 
web_sales compute statistics for columns; Error Stack: analyze table web_sales 
compute statistics for columns;], TaskAttempt 3 failed, info=[Error: Error 
while running task ( failure ) : 
attempt_1678779198717__2_00_52_3:java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: 
org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
block -1 in file 
s3a://xx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
     at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
     at 
org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)     
at 
org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
     at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
     at 
org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
     at java.base/java.security.AccessController.doPrivileged(Native Method)    
 at java.base/javax.security.auth.Subject.doAs(Subject.java:423)     at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
     at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
     at 
org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
     at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)     
at 
org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)     
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
     at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
     at java.base/java.lang.Thread.run(Thread.java:829) Caused by: 
java.lang.RuntimeException: java.io.IOException: 
org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
block -1 in file 
s3a://xxx/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
     at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
     at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145)
     at 
org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
     at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:164)
     at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)    
 at 
org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:704)  
   at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:663)    
 at 
org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
     at 

[jira] [Assigned] (HIVE-27213) Parquet logical decimal type to INT32 is not working while computing statistics

2023-04-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27213:
-

Assignee: KIRTI RUGE

> Parquet logical decimal type to INT32 is not working while computing statistics
> -
>
> Key: HIVE-27213
> URL: https://issues.apache.org/jira/browse/HIVE-27213
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
> Attachments: test.parquet
>
>
> [^test.parquet]
> Steps to reproduce:
> dfs ${system:test.dfs.mkdir} hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
> dfs -copyFromLocal ../../data/files/dwxtest.parquet 
> hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825;
> dfs -ls hdfs:///tmp/dwxtest/ws_sold_date_sk=2451825/;
> CREATE EXTERNAL TABLE `web_sales`(
> `ws_sold_time_sk` int,
> `ws_ship_date_sk` int,
> `ws_item_sk` int,
> `ws_bill_customer_sk` int,
> `ws_bill_cdemo_sk` int,
> `ws_bill_hdemo_sk` int,
> `ws_bill_addr_sk` int,
> `ws_ship_customer_sk` int,
> `ws_ship_cdemo_sk` int,
> `ws_ship_hdemo_sk` int,
> `ws_ship_addr_sk` int,
> `ws_web_page_sk` int,
> `ws_web_site_sk` int,
> `ws_ship_mode_sk` int,
> `ws_warehouse_sk` int,
> `ws_promo_sk` int,
> `ws_order_number` bigint,
> `ws_quantity` int,
> `ws_wholesale_cost` decimal(7,2),
> `ws_list_price` decimal(7,2),
> `ws_sales_price` decimal(7,2),
> `ws_ext_discount_amt` decimal(7,2),
> `ws_ext_sales_price` decimal(7,2),
> `ws_ext_wholesale_cost` decimal(7,2),
> `ws_ext_list_price` decimal(7,2),
> `ws_ext_tax` decimal(7,2),
> `ws_coupon_amt` decimal(7,2),
> `ws_ext_ship_cost` decimal(7,2),
> `ws_net_paid` decimal(7,2),
> `ws_net_paid_inc_tax` decimal(7,2),
> `ws_net_paid_inc_ship` decimal(7,2),
> `ws_net_paid_inc_ship_tax` decimal(7,2),
> `ws_net_profit` decimal(7,2))
> PARTITIONED BY (
> `ws_sold_date_sk` int)
> ROW FORMAT SERDE
> 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
> STORED AS PARQUET LOCATION 'hdfs:///tmp/dwxtest/';
> MSCK REPAIR TABLE web_sales;
> analyze table web_sales compute statistics for columns;
>  
> Error Stack:
>  
> {noformat}
> analyze table web_sales compute statistics for columns;
> ], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : 
> attempt_1678779198717__2_00_52_3:java.lang.RuntimeException: 
> java.lang.RuntimeException: java.io.IOException: 
> org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
> block -1 in file 
> s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:351)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:280)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:84)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:70)
>   at java.base/java.security.AccessController.doPrivileged(Native Method)
>   at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:70)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:40)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:118)
>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>   at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>   at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: java.lang.RuntimeException: java.io.IOException: 
> org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in 
> block -1 in file 
> s3a://nfqe-tpcds-test/spark-tpcds/sf1000-parquet/useDecimal=true,useDate=true,filterNull=false/web_sales/ws_sold_date_sk=2451825/part-00796-788bef86-2748-4e21-a464-b34c7e646c94-cfcafd2c-2abd-4067-8aea-f58cb1021b35.c000.snappy.parquet
>   at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
>   at 
> 

[jira] [Updated] (HIVE-27166) Introduce Apache Commons DBUtils to handle boilerplate code

2023-03-22 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27166:
--
Description: 
Apache Commons DbUtils is a small library that makes working with JDBC a lot 
easier.

The current scope of this Jira is introducing the latest Apache Commons DbUtils 
version and changing the applicable methods in the TxnHandler and 
CompactionTxnHandler classes.

  was:
Apache Commons DbUtils is a small library that makes working with JDBC a lot 
easier.

Currently scope of this Jira is introducing Apache DBUtils latest version for 
applicable methods in TxnHandler and CompactionTxnHandler classes.


> Introduce Apache Commons DBUtils to handle boilerplate code
> ---
>
> Key: HIVE-27166
> URL: https://issues.apache.org/jira/browse/HIVE-27166
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> Apache Commons DbUtils is a small library that makes working with JDBC a lot 
> easier.
> The current scope of this Jira is introducing the latest Apache Commons DbUtils 
> version and changing the applicable methods in the TxnHandler and 
> CompactionTxnHandler classes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27166) Introduce Apache Commons DBUtils to handle boilerplate code

2023-03-22 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27166:
-

Assignee: KIRTI RUGE

> Introduce Apache Commons DBUtils to handle boilerplate code
> ---
>
> Key: HIVE-27166
> URL: https://issues.apache.org/jira/browse/HIVE-27166
> Project: Hive
>  Issue Type: Improvement
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> Apache Commons DbUtils is a small library that makes working with JDBC a lot 
> easier.
> The current scope of this Jira is introducing the latest Apache Commons DbUtils 
> version for the applicable methods in the TxnHandler and CompactionTxnHandler classes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-24429) Figure out a better way to test failed compactions

2023-03-05 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-24429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-24429:
-

Assignee: KIRTI RUGE

> Figure out a better way to test failed compactions
> --
>
> Key: HIVE-24429
> URL: https://issues.apache.org/jira/browse/HIVE-24429
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: KIRTI RUGE
>Priority: Major
>
> This block is executed during compaction: 
> {code:java}
> if(conf.getBoolVar(HiveConf.ConfVars.HIVE_IN_TEST) && 
> conf.getBoolVar(HiveConf.ConfVars.HIVETESTMODEFAILCOMPACTION)) {
>  throw new 
> RuntimeException(HiveConf.ConfVars.HIVETESTMODEFAILCOMPACTION.name() + 
> "=true");
> }{code}
> We should figure out a better way to test failed compactions than including 
> test code in the source.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-27085) Revert Manual constructor from AbortCompactionResponseElement

2023-02-16 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-27085:
--
Summary: Revert Manual constructor from AbortCompactionResponseElement  
(was: Revert Manual constructor from AbortCompactionResponseElement.java)

> Revert Manual constructor from AbortCompactionResponseElement
> -
>
> Key: HIVE-27085
> URL: https://issues.apache.org/jira/browse/HIVE-27085
> Project: Hive
>  Issue Type: Bug
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-27085) Revert Manual constructor from AbortCompactionResponseElement.java

2023-02-16 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-27085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-27085:
-

Assignee: KIRTI RUGE

> Revert Manual constructor from AbortCompactionResponseElement.java
> --
>
> Key: HIVE-27085
> URL: https://issues.apache.org/jira/browse/HIVE-27085
> Project: Hive
>  Issue Type: Bug
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26858) OOM / high GC caused by purgeCompactionHistory

2023-02-14 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-26858:
--
Summary: OOM / high GC caused by purgeCompactionHistory  (was: OOM / high 
GC caused by showCompactions & purgeCompactionHistory)

> OOM / high GC caused by purgeCompactionHistory
> --
>
> Key: HIVE-26858
> URL: https://issues.apache.org/jira/browse/HIVE-26858
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> If for some reason the housekeeper service wasn't running, it could cause an 
> OOM when activated. showCompactions & purgeCompactionHistory load the complete 
> history of events into the heap, which should be reviewed.
> purgeCompactionHistory might be refactored with batching, or its `select` 
> first and then `delete` approach replaced with a single more complex delete query.
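> A rough sketch of the "complex delete" direction (the retention threshold and the correlated-delete form are illustrative, not the actual implementation; exact syntax depends on the metastore RDBMS):
> {code:sql}
> -- Hypothetical: keep only the 3 newest history rows per table/partition in a
> -- single statement, instead of selecting the full history into memory first
> DELETE FROM COMPLETED_COMPACTIONS cc
> WHERE (SELECT COUNT(*)
>          FROM COMPLETED_COMPACTIONS newer
>         WHERE newer.CC_DATABASE = cc.CC_DATABASE
>           AND newer.CC_TABLE = cc.CC_TABLE
>           AND COALESCE(newer.CC_PARTITION, '') = COALESCE(cc.CC_PARTITION, '')
>           AND newer.CC_ID > cc.CC_ID) >= 3;
> {code}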



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-26804) Cancel Compactions in initiated state

2023-02-13 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687811#comment-17687811
 ] 

KIRTI RUGE edited comment on HIVE-26804 at 2/13/23 8:57 AM:


Thanks László Végh for the review. The patch has been merged upstream.


was (Author: JIRAUSER294595):
Thanks Laszlo V for review. The patch has been merged to upstream

> Cancel Compactions in initiated state
> -
>
> Key: HIVE-26804
> URL: https://issues.apache.org/jira/browse/HIVE-26804
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26804) Cancel Compactions in initiated state

2023-02-13 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE resolved HIVE-26804.
---
Resolution: Fixed

> Cancel Compactions in initiated state
> -
>
> Key: HIVE-26804
> URL: https://issues.apache.org/jira/browse/HIVE-26804
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26804) Cancel Compactions in initiated state

2023-02-13 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-26804:
--
Fix Version/s: 4.0.0

> Cancel Compactions in initiated state
> -
>
> Key: HIVE-26804
> URL: https://issues.apache.org/jira/browse/HIVE-26804
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26804) Cancel Compactions in initiated state

2023-02-13 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17687811#comment-17687811
 ] 

KIRTI RUGE commented on HIVE-26804:
---

Thanks Laszlo V for review. The patch has been merged to upstream

> Cancel Compactions in initiated state
> -
>
> Key: HIVE-26804
> URL: https://issues.apache.org/jira/browse/HIVE-26804
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26858) OOM / high GC caused by showCompactions & purgeCompactionHistory

2023-02-12 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26858:
-

Assignee: KIRTI RUGE

> OOM / high GC caused by showCompactions & purgeCompactionHistory
> 
>
> Key: HIVE-26858
> URL: https://issues.apache.org/jira/browse/HIVE-26858
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> If for some reason the housekeeper service wasn't running, it could cause an 
> OOM when activated. showCompactions & purgeCompactionHistory load the complete 
> history of events into the heap, which should be reviewed.
> purgeCompactionHistory might be refactored with batching, or its `select` 
> first and then `delete` approach replaced with a single more complex delete query.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26844) Hive INSERT OVERWRITE with GET_JSON_OBJECT throws NoClassDefFoundError: org/codehaus/jackson/annotate/JsonClass

2022-12-15 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17647916#comment-17647916
 ] 

KIRTI RUGE commented on HIVE-26844:
---

Is it reproducible in HIVE 4.0.0?

> Hive INSERT OVERWRITE with GET_JSON_OBJECT throws NoClassDefFoundError: 
> org/codehaus/jackson/annotate/JsonClass
> -
>
> Key: HIVE-26844
> URL: https://issues.apache.org/jira/browse/HIVE-26844
> Project: Hive
>  Issue Type: Bug
> Environment: Hive version: 2.3.7
> Hadoop version: 2.7.2
>Reporter: yutiantian
>Priority: Major
>
> Hive version: 2.3.7
> Hadoop version: 2.7.2
> Running the following through Hive:
> INSERT OVERWRITE TABLE test.test_temp PARTITION(dt = '2021-07-01') 
> SELECT null AS local_time,null AS 
> local_date,TRIM(GET_JSON_OBJECT(request_body, '$.app_version')) AS 
> app_version FROM test.test_ods_checklog WHERE dt = '2021-07-01' limit 1; 
> it fails with the following error:
> Caused by: java.lang.NoClassDefFoundError: 
> org/codehaus/jackson/annotate/JsonClass
>     at 
> org.codehaus.jackson.map.introspect.JacksonAnnotationIntrospector.findDeserializationType(JacksonAnnotationIntrospector.java:524)
>     at 
> org.codehaus.jackson.map.deser.BasicDeserializerFactory.modifyTypeByAnnotation(BasicDeserializerFactory.java:732)
>     at 
> org.codehaus.jackson.map.deser.BasicDeserializerFactory.createMapDeserializer(BasicDeserializerFactory.java:337)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:377)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:307)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:287)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:136)
>     at 
> org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:157)
>     at 
> org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2468)
>     at 
> org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
>     at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1616)
>     at org.apache.hadoop.hive.ql.udf.UDFJson.evaluate(UDFJson.java:170)
>     ... 27 more
> However, running the SELECT part on its own works fine:
> SELECT null AS local_time,null AS 
> local_date,TRIM(GET_JSON_OBJECT(request_body, '$.app_version')) AS 
> app_version FROM test.test_ods_checklog WHERE dt = '2021-07-01' limit 1;
> Where could the problem be? Thanks for any reply.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26825) Compactor: Cleaner shouldn't fetch table details again and again for partitioned tables

2022-12-08 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26825:
-

Assignee: KIRTI RUGE

> Compactor: Cleaner shouldn't fetch table details again and again for 
> partitioned tables
> ---
>
> Key: HIVE-26825
> URL: https://issues.apache.org/jira/browse/HIVE-26825
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> The Cleaner shouldn't fetch table/partition details again and again for every 
> partition of a table. When there is a large number of databases/tables, it 
> takes a lot of time for the Initiator to complete its initial iteration and 
> the load on the DB also goes up.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26805) Cancel ongoing/working compaction requests

2022-12-02 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26805:
-

Assignee: KIRTI RUGE

> Cancel ongoing/working compaction requests
> --
>
> Key: HIVE-26805
> URL: https://issues.apache.org/jira/browse/HIVE-26805
> Project: Hive
>  Issue Type: New Feature
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-26803) Ability to cancel compactions

2022-12-02 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE updated HIVE-26803:
--
Description: 
This has to take care of the tasks below:
 # Cancel compactions in the initiated state
 # Cancel ongoing/working compactions
 # Gracefully handle compaction requests when HS2 shuts down

> Ability to cancel compactions
> -
>
> Key: HIVE-26803
> URL: https://issues.apache.org/jira/browse/HIVE-26803
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> This has to take care of the tasks below:
>  # Cancel compactions in the initiated state (see the sketch below)
>  # Cancel ongoing/working compactions
>  # Gracefully handle compaction requests when HS2 shuts down
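> As an illustration of what cancelling a queued compaction could look like from SQL (the ABORT syntax and the id below are only a sketch of the feature being designed here, not committed behaviour):
> {code:sql}
> -- Locate compactions that are still in the 'initiated' state, then abort one by id
> SHOW COMPACTIONS;
> ABORT COMPACTIONS 1234;
> {code}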



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26804) Cancel Compactions in initiated state

2022-12-02 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26804:
-

Assignee: KIRTI RUGE

> Cancel Compactions in initiated state
> -
>
> Key: HIVE-26804
> URL: https://issues.apache.org/jira/browse/HIVE-26804
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26803) Ability to cancel compactions

2022-12-02 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26803:
-

Assignee: KIRTI RUGE

> Ability to cancel compactions
> -
>
> Key: HIVE-26803
> URL: https://issues.apache.org/jira/browse/HIVE-26803
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-22 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17637224#comment-17637224
 ] 

KIRTI RUGE commented on HIVE-26085:
---

I have tried reproducing this on the latest Hive master and I get a proper 
HiveAccessControlException like the one below:

FAILED: HiveAccessControlException Permission denied: Principal [name=user4, 
type=USER] does not have following privileges for operation CREATEVIEW [[OBJECT 
OWNERSHIP] on Object [type=DATABASE, name=db1], [SELECT with grant] on Object 
[type=TABLE_OR_VIEW, name=db1.tab1]]

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-22 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26085:
-

Assignee: (was: KIRTI RUGE)

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26764) Show compaction request should have all fields optional

2022-11-19 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26764:
-

Assignee: KIRTI RUGE

> Show compaction request should have all fields optional
> --
>
> Key: HIVE-26764
> URL: https://issues.apache.org/jira/browse/HIVE-26764
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 4.0.0-alpha-2
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-08 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26085:
-

Assignee: KIRTI RUGE

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Assignee: KIRTI RUGE
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-25483) TxnHandler::acquireLock should close the DB conn to avoid connection leaks

2022-11-08 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE resolved HIVE-25483.
---
Resolution: Fixed

> TxnHandler::acquireLock should close the DB conn to avoid connection leaks
> --
>
> Key: HIVE-25483
> URL: https://issues.apache.org/jira/browse/HIVE-25483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: KIRTI RUGE
>Priority: Major
>
> TxnHandler::acquireLock should close DB connection on exiting the function. 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5688]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5726]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5737-L5740]
>  If there are any exceptions downstream, this connection isn't closed 
> cleanly. In a corner case, hikari connection leak detector reported the 
> following
> {noformat}
> 2021-08-26 09:19:18,102 WARN  com.zaxxer.hikari.pool.ProxyLeakTask: 
> [HikariPool-4 housekeeper]: Connection leak detection triggered for 
> org.postgresql.jdbc.PgConnection@77f76747, stack trace follows
> java.lang.Exception: Apparent connection leak detected
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:3843)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:5135)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:107) 
> {noformat}
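
For context, the leak-avoidance pattern this report asks for is simply: if the lock cannot be acquired, close the pooled connection before propagating the error; only a successfully acquired lock should keep its connection open for the caller to release later. A minimal sketch of that idea, assuming a generic DataSource-backed pool rather than the real TxnHandler code:

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

public class LockAcquirer {
  private final DataSource pool; // e.g. a HikariCP-backed DataSource

  public LockAcquirer(DataSource pool) {
    this.pool = pool;
  }

  /**
   * Sketch of an acquireLock-style method. On success the connection is handed
   * back to the caller (closing it later releases the DB-level lock); on any
   * failure path the connection is closed here so it is returned to the pool
   * instead of leaking, which is what the Hikari leak detector reported.
   */
  public Connection acquireLock(String lockStatement) throws SQLException {
    Connection conn = pool.getConnection();
    try {
      try (Statement stmt = conn.createStatement()) {
        stmt.execute(lockStatement); // e.g. a SELECT ... FOR UPDATE on a mutex row
      }
      return conn; // caller must close this handle to release the lock
    } catch (SQLException | RuntimeException e) {
      conn.close(); // do not leak the pooled connection when acquisition fails
      throw e;
    }
  }
}
{code}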



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-25483) TxnHandler::acquireLock should close the DB conn to avoid connection leaks

2022-11-08 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17629600#comment-17629600
 ] 

KIRTI RUGE edited comment on HIVE-25483 at 11/8/22 12:15 PM:
-

This has already been taken care of.

TxnHandler.acquireLock() closes all DB connections.

This has been fixed as a part of 

[*HIVE-24236: Fixed possible Connection leaks in TxnHandler (Yongzhi Chen, 
reviewed by Denys Kuzmenko)*|https://github.com/apache/hive/pull/1559/files#top]


was (Author: JIRAUSER294595):
This has been taken care already.

TxnHandler.acquireLock() closes all DB connections .

> TxnHandler::acquireLock should close the DB conn to avoid connection leaks
> --
>
> Key: HIVE-25483
> URL: https://issues.apache.org/jira/browse/HIVE-25483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: KIRTI RUGE
>Priority: Major
>
> TxnHandler::acquireLock should close DB connection on exiting the function. 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5688]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5726]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5737-L5740]
>  If there are any exceptions downstream, this connection isn't closed 
> cleanly. In a corner case, hikari connection leak detector reported the 
> following
> {noformat}
> 2021-08-26 09:19:18,102 WARN  com.zaxxer.hikari.pool.ProxyLeakTask: 
> [HikariPool-4 housekeeper]: Connection leak detection triggered for 
> org.postgresql.jdbc.PgConnection@77f76747, stack trace follows
> java.lang.Exception: Apparent connection leak detected
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:3843)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:5135)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:107) 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26706) Add datalake to Hive metadata

2022-11-07 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17629745#comment-17629745
 ] 

KIRTI RUGE commented on HIVE-26706:
---

Could you let me know which version you tried this with?

> Add datalake to Hive metadata
> -
>
> Key: HIVE-26706
> URL: https://issues.apache.org/jira/browse/HIVE-26706
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, Metastore
>Reporter: heng.zhao
>Priority: Major
>
> 0: jdbc:hive2://localhost:1> show tables;
> +---+---+
> | tab_name  | type           |
> +---+---+
> | test1     | warehouse      |
> | test2     | iceberg           |
> +---+---+
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-07 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17629655#comment-17629655
 ] 

KIRTI RUGE commented on HIVE-26085:
---

Can I take this?

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-26706) Add datalake to Hive metadata

2022-11-07 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-26706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17629656#comment-17629656
 ] 

KIRTI RUGE commented on HIVE-26706:
---

Can I take this?

> Add datalake to Hive metadata
> -
>
> Key: HIVE-26706
> URL: https://issues.apache.org/jira/browse/HIVE-26706
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, Metastore
>Reporter: heng.zhao
>Priority: Major
>
> 0: jdbc:hive2://localhost:1> show tables;
> +---+---+
> | tab_name  | type           |
> +---+---+
> | test1     | warehouse      |
> | test2     | iceberg           |
> +---+---+
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26706) Add datalake to Hive metadata

2022-11-06 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26706:
-

Assignee: (was: KIRTI RUGE)

> Add datalake to Hive metadata
> -
>
> Key: HIVE-26706
> URL: https://issues.apache.org/jira/browse/HIVE-26706
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, Metastore
>Reporter: heng.zhao
>Priority: Major
>
> 0: jdbc:hive2://localhost:1> show tables;
> +---+---+
> | tab_name  | type           |
> +---+---+
> | test1     | warehouse      |
> | test2     | iceberg           |
> +---+---+
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-06 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26085:
-

Assignee: (was: KIRTI RUGE)

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-25483) TxnHandler::acquireLock should close the DB conn to avoid connection leaks

2022-11-06 Thread KIRTI RUGE (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17629600#comment-17629600
 ] 

KIRTI RUGE commented on HIVE-25483:
---

This has already been taken care of.

TxnHandler.acquireLock() closes all DB connections.

> TxnHandler::acquireLock should close the DB conn to avoid connection leaks
> --
>
> Key: HIVE-25483
> URL: https://issues.apache.org/jira/browse/HIVE-25483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: KIRTI RUGE
>Priority: Major
>
> TxnHandler::acquireLock should close DB connection on exiting the function. 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5688]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5726]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5737-L5740]
>  If there are any exceptions downstream, this connection isn't closed 
> cleanly. In a corner case, hikari connection leak detector reported the 
> following
> {noformat}
> 2021-08-26 09:19:18,102 WARN  com.zaxxer.hikari.pool.ProxyLeakTask: 
> [HikariPool-4 housekeeper]: Connection leak detection triggered for 
> org.postgresql.jdbc.PgConnection@77f76747, stack trace follows
> java.lang.Exception: Apparent connection leak detected
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:3843)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:5135)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:107) 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26706) Add datalake to Hive metadata

2022-11-04 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26706:
-

Assignee: KIRTI RUGE

> Add datalake to Hive metadata
> -
>
> Key: HIVE-26706
> URL: https://issues.apache.org/jira/browse/HIVE-26706
> Project: Hive
>  Issue Type: New Feature
>  Components: Database/Schema, Metastore
>Reporter: heng.zhao
>Assignee: KIRTI RUGE
>Priority: Major
>
> 0: jdbc:hive2://localhost:1> show tables;
> +---+---+
> | tab_name  | type           |
> +---+---+
> | test1     | warehouse      |
> | test2     | iceberg           |
> +---+---+
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-25483) TxnHandler::acquireLock should close the DB conn to avoid connection leaks

2022-11-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-25483:
-

Assignee: KIRTI RUGE

> TxnHandler::acquireLock should close the DB conn to avoid connection leaks
> --
>
> Key: HIVE-25483
> URL: https://issues.apache.org/jira/browse/HIVE-25483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: KIRTI RUGE
>Priority: Major
>
> TxnHandler::acquireLock should close DB connection on exiting the function. 
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5688]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5726]
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java#L5737-L5740]
>  If there are any exceptions downstream, this connection isn't closed 
> cleanly. In a corner case, hikari connection leak detector reported the 
> following
> {noformat}
> 2021-08-26 09:19:18,102 WARN  com.zaxxer.hikari.pool.ProxyLeakTask: 
> [HikariPool-4 housekeeper]: Connection leak detection triggered for 
> org.postgresql.jdbc.PgConnection@77f76747, stack trace follows
> java.lang.Exception: Apparent connection leak detected
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.getDbConn(TxnHandler.java:3843)
> at 
> org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:5135)
>  
> at 
> org.apache.hadoop.hive.ql.txn.compactor.Initiator.run(Initiator.java:107) 
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26085) "getTableObjectByName method should ignore it" exception doesn't include cause

2022-11-03 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26085:
-

Assignee: KIRTI RUGE

> "getTableObjectByName method should ignore it" exception doesn't include cause
> --
>
> Key: HIVE-26085
> URL: https://issues.apache.org/jira/browse/HIVE-26085
> Project: Hive
>  Issue Type: Improvement
>  Components: Parser
> Environment: HDP 3.1.5
>Reporter: Wataru Yukawa
>Assignee: KIRTI RUGE
>Priority: Major
>
> The current logic doesn't include the cause:
> https://github.com/apache/hive/blob/a6e93633dc15aba179fb6ad422be4cbc88adf071/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L12208
> but
> {code}
> throw new SemanticException("Got exception though getTableObjectByName method 
> should ignore it", e)
> {code}
> seems better for troubleshooting.
> We encounter this issue when a user accesses a Hive view where the user doesn't have 
> permission on the original Hive table.
> {code}
> create view aaa_view
> as
> select ... from aaa
> {code}
> "getTableObjectByName" exception happens when user try to access "aaa_view" 
> but doesn't have permission "aaa" table with apache ranger.
> {code}
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.SemanticException:Got exception though 
> getTableObjectByName method should ignore it
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.walkASTMarkTABREF(SemanticAnalyzer.java:12020)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.rewriteASTWithMaskAndFilter(SemanticAnalyzer.java:12139)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.replaceViewReferenceWithDefinition(SemanticAnalyzer.java:2608)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2192)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2103)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2257)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.getMetaData(SemanticAnalyzer.java:2088)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genResolvedParseTree(SemanticAnalyzer.java:12234)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12328)
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:367)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:290)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:664)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1870)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1817)
> at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1812)
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
> at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:197)
> ... 26 common frames omitted
> {code}
> In this case, we can't see an error log like "permission denied" when the user tries 
> to access "aaa_view".
> So it would be nice to add the cause.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-21154) Investigate using object IDs in Acid HMS schema instead of names

2022-10-28 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-21154:
-

Assignee: KIRTI RUGE

> Investigate using object IDs in Acid HMS schema instead of names
> 
>
> Key: HIVE-21154
> URL: https://issues.apache.org/jira/browse/HIVE-21154
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: KIRTI RUGE
>Priority: Major
>
> Currently all Acid related tables in HMS DB (HIVE_LOCKS, TXN_COMPONENTS, etc) 
> use db_name/table_name/partition_name to identify the metastore object that 
> is being tracked (these are potentially long strings, especially the partition name).  It 
> would improve perf to use object ID such as TBLS.TBL_ID which is exposed in 
> Thrift since HIVE-20556.  It would also make handling object rename 
> operations a no-op (currently handled in {{TxnHandler.onRename()}} from 
> {{AcidEventListener extends MetaStoreEventListener}}).  This would require 
> significant HMS schema changes and surfacing the ID of Database/Partition 
> objects.
> Need to think how this affects replication.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26666) Filter out compactions by id to minimise expense of db operations

2022-10-23 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26666:
-

Assignee: KIRTI RUGE

> Filter out compactions by id to minimise  expense of db operations
> --
>
> Key: HIVE-26666
> URL: https://issues.apache.org/jira/browse/HIVE-26666
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> At present we perform expensive DB operations while filtering out compactions in classes 
> like
> AlterTableCompactOperation and
> Cleaner.
> These should use the SHOW COMPACTIONS filter option provided by 
> https://issues.apache.org/jira/browse/HIVE-13353
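
In other words, the id predicate should be pushed into the metastore query rather than listing every compaction and filtering on the client. A purely hypothetical sketch of that call shape follows; the filter type, its fields, and the client interface are illustrative stand-ins, not the actual Thrift API.

{code:java}
import java.util.List;

public final class CompactionLookup {
  /** Illustrative stand-in for a show-compactions filter; field names are assumptions. */
  public static final class CompactionFilter {
    Long compactionId;
    String dbName;
    String tableName;
  }

  /** Illustrative stand-in for the metastore client call. */
  public interface CompactionClient {
    List<String> showCompactions(CompactionFilter filter);
  }

  /** Single-record lookup: the id is applied in the HMS SQL query, not client-side. */
  public static List<String> findById(CompactionClient client, long id) {
    CompactionFilter filter = new CompactionFilter();
    filter.compactionId = id;
    return client.showCompactions(filter);
  }
}
{code}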



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26580) SHOW COMPACTIONS should support ordering and limiting functionality in filtering options

2022-09-30 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26580:
-

Assignee: KIRTI RUGE

> SHOW COMPACTIONS should support ordering and limiting functionality in 
> filtering options
> 
>
> Key: HIVE-26580
> URL: https://issues.apache.org/jira/browse/HIVE-26580
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
> Fix For: 4.0.0
>
>
> SHOW COMPACTIONS should provide ordering by a defined table. It should also 
> support limiting the number of fetched records.
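
As a usage illustration only: once ordering and limiting are supported, the statement could be issued over JDBC like any other HiveQL. The ORDER BY/LIMIT clause below shows the proposed shape rather than a confirmed final syntax, and the connection URL is a placeholder.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowCompactionsExample {
  public static void main(String[] args) throws Exception {
    // Placeholder connection string; adjust host/port/credentials as needed.
    try (Connection conn =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement stmt = conn.createStatement();
         // Proposed shape: order the compaction history and cap the result size
         // so clusters with thousands of partitions return a manageable list.
         ResultSet rs = stmt.executeQuery("SHOW COMPACTIONS ORDER BY 'TABLE' LIMIT 10")) {
      while (rs.next()) {
        // Print the first two output columns of each row.
        System.out.println(rs.getString(1) + "\t" + rs.getString(2));
      }
    }
  }
}
{code}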



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26563) Add extra columns in Show Compactions output and sort the output

2022-09-26 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26563:
-

Assignee: KIRTI RUGE

> Add extra columns in Show Compactions output and sort the output
> 
>
> Key: HIVE-26563
> URL: https://issues.apache.org/jira/browse/HIVE-26563
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>
> SHOW COMPACTIONS needs reformatting in the aspects below:
> 1. Add all of the columns below:
>   host information, duration, next_txn_id, txn_id, commit_time, 
> highest_write_id, cleaner start, tbl_properties
> 2. Sort the output so that the most recent element (either completed or in 
> progress) is displayed at the start



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HIVE-26481) Cleaner fails with FileNotFoundException

2022-09-21 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE resolved HIVE-26481.
---
Resolution: Fixed

This has been merged to master. Thanks [Denys 
Kuzmenko|https://github.com/deniskuzZ] and 
[~ayushtkn] for the review.

> Cleaner fails with FileNotFoundException
> 
>
> Key: HIVE-26481
> URL: https://issues.apache.org/jira/browse/HIVE-26481
> Project: Hive
>  Issue Type: Bug
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> The compaction fails when the Cleaner tries to remove a missing directory 
> from HDFS.
> {code:java}
> 2022-08-05 18:56:38,873 INFO org.apache.hadoop.hive.ql.txn.compactor.Cleaner: 
> [Cleaner-executor-thread-0]: Starting cleaning for 
> id:30,dbname:default,tableName:test_concur_compaction_minor,partName:null,state:�,type:MINOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:4,errorMessage:null,workerId:
>  null,initiatorId: null 2022-08-05 18:56:38,888 ERROR 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: 
> Caught exception when cleaning, unable to complete cleaning of 
> id:30,dbname:default,tableName:test_concur_compaction_minor,partName:null,state:�,type:MINOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:4,errorMessage:null,workerId:
>  null,initiatorId: null java.io.FileNotFoundException: File 
> hdfs://ns1/warehouse/tablespace/managed/hive/test_concur_compaction_minor/.hive-staging_hive_2022-08-05_18-56-37_115_5049319600695911622-37
>  does not exist. at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1275)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1249)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1194)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1190)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1208)
>  at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2144) 
> at org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2332) at 
> org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2309) at 
> org.apache.hadoop.hive.ql.io.AcidUtils.getHdfsDirSnapshots(AcidUtils.java:1440)
>  at 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner.removeFiles(Cleaner.java:287) 
> at org.apache.hadoop.hive.ql.txn.compactor.Cleaner.clean(Cleaner.java:214) at 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner.lambda$run$0(Cleaner.java:114)
>  at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil$ThrowingRunnable.lambda$unchecked$0(CompactorUtil.java:54)
>  at 
> java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:834){code}
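
For illustration, the race here is a staging/obsolete directory disappearing between the snapshot listing and the delete. A minimal sketch of the defensive pattern using plain Hadoop FileSystem APIs follows; it shows the idea of tolerating the missing path, not the exact patch that was merged.

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TolerantRemove {
  /**
   * Deletes obsolete directories, treating "already gone" as success: another
   * process (for example a query cleaning up its own .hive-staging directory)
   * may remove the path between the snapshot and the delete.
   */
  public static void removeFiles(FileSystem fs, Iterable<Path> obsolete) throws IOException {
    for (Path p : obsolete) {
      try {
        FileStatus status = fs.getFileStatus(p); // throws FileNotFoundException if gone
        fs.delete(status.getPath(), true);
      } catch (FileNotFoundException e) {
        // Nothing left to clean; the directory vanished after it was listed.
      }
    }
  }
}
{code}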



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-26481) Cleaner fails with FileNotFoundException

2022-09-21 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-26481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-26481:
-

Assignee: KIRTI RUGE

> Cleaner fails with FileNotFoundException
> 
>
> Key: HIVE-26481
> URL: https://issues.apache.org/jira/browse/HIVE-26481
> Project: Hive
>  Issue Type: Bug
>Reporter: KIRTI RUGE
>Assignee: KIRTI RUGE
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> The compaction fails when the Cleaner tries to remove a missing directory 
> from HDFS.
> {code:java}
> 2022-08-05 18:56:38,873 INFO org.apache.hadoop.hive.ql.txn.compactor.Cleaner: 
> [Cleaner-executor-thread-0]: Starting cleaning for 
> id:30,dbname:default,tableName:test_concur_compaction_minor,partName:null,state:�,type:MINOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:4,errorMessage:null,workerId:
>  null,initiatorId: null 2022-08-05 18:56:38,888 ERROR 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner: [Cleaner-executor-thread-0]: 
> Caught exception when cleaning, unable to complete cleaning of 
> id:30,dbname:default,tableName:test_concur_compaction_minor,partName:null,state:�,type:MINOR,enqueueTime:0,start:0,properties:null,runAs:hive,tooManyAborts:false,hasOldAbort:false,highestWriteId:4,errorMessage:null,workerId:
>  null,initiatorId: null java.io.FileNotFoundException: File 
> hdfs://ns1/warehouse/tablespace/managed/hive/test_concur_compaction_minor/.hive-staging_hive_2022-08-05_18-56-37_115_5049319600695911622-37
>  does not exist. at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1275)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1249)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1194)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1190)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listLocatedStatus(DistributedFileSystem.java:1208)
>  at org.apache.hadoop.fs.FileSystem.listLocatedStatus(FileSystem.java:2144) 
> at org.apache.hadoop.fs.FileSystem$5.handleFileStat(FileSystem.java:2332) at 
> org.apache.hadoop.fs.FileSystem$5.hasNext(FileSystem.java:2309) at 
> org.apache.hadoop.hive.ql.io.AcidUtils.getHdfsDirSnapshots(AcidUtils.java:1440)
>  at 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner.removeFiles(Cleaner.java:287) 
> at org.apache.hadoop.hive.ql.txn.compactor.Cleaner.clean(Cleaner.java:214) at 
> org.apache.hadoop.hive.ql.txn.compactor.Cleaner.lambda$run$0(Cleaner.java:114)
>  at 
> org.apache.hadoop.hive.ql.txn.compactor.CompactorUtil$ThrowingRunnable.lambda$unchecked$0(CompactorUtil.java:54)
>  at 
> java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1736)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  at java.base/java.lang.Thread.run(Thread.java:834){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (HIVE-13353) SHOW COMPACTIONS should support filtering options

2022-09-18 Thread KIRTI RUGE (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-13353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KIRTI RUGE reassigned HIVE-13353:
-

Assignee: KIRTI RUGE

> SHOW COMPACTIONS should support filtering options
> -
>
> Key: HIVE-13353
> URL: https://issues.apache.org/jira/browse/HIVE-13353
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: KIRTI RUGE
>Priority: Major
> Attachments: HIVE-13353.01.patch
>
>
> Since we now have historical information in SHOW COMPACTIONS, the output can 
> easily become unwieldy (e.g. 1000 partitions with 3 lines of history each).
> This is a significant usability issue.
> We need to add the ability to filter by db/table/partition.
> Perhaps it would also be useful to filter by status.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)