[jira] [Resolved] (HIVE-25672) Hive isn't purging older compaction entries from show compaction command

2022-08-30 Thread Rohan Nimmagadda (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohan Nimmagadda resolved HIVE-25672.
-
Resolution: Fixed

> Hive isn't purging older compaction entries from show compaction command
> 
>
> Key: HIVE-25672
> URL: https://issues.apache.org/jira/browse/HIVE-25672
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Transactions
>Affects Versions: 3.1.1
>Reporter: Rohan Nimmagadda
>Priority: Minor
>
> Added the below properties in hive-site, but auto purging is not being enforced.
> When we run the show compactions command, it takes forever and returns hundreds
> of thousands of rows.
> Result of the show compactions command:
> {code:java}
> 752,450 rows selected (198.066 seconds) 
> {code}
> {code:java}
> "hive.compactor.history.retention.succeeded": "10",
> "hive.compactor.history.retention.failed": "10",  
> "hive.compactor.history.retention.attempted": "10",  
> "hive.compactor.history.reaper.interval": "10m" {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-25672) Hive isn't purging older compaction entries from show compaction command

2022-08-30 Thread Rohan Nimmagadda (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17598020#comment-17598020
 ] 

Rohan Nimmagadda commented on HIVE-25672:
-

We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented (a minimal Java sketch follows the list):
 # hive.direct.sql.max.parameters=1 (Any one instance of HMS)
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true (Any one instance of HMS)
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3
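
As a minimal sketch only: the same values applied on a HiveConf, for example in an 
embedded or test metastore setup. The class name here is hypothetical, the values are 
just the ones listed above (not recommended defaults), and in a real deployment these 
would normally be set in hive-site.xml on the HMS hosts instead.
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class CompactionHistoryRetentionConf {

  // Returns a HiveConf carrying the retention/housekeeping settings listed above.
  // Plain string keys are used instead of HiveConf.ConfVars to keep the sketch
  // version-agnostic.
  public static HiveConf build() {
    HiveConf conf = new HiveConf();
    conf.set("hive.direct.sql.max.parameters", "1");              // any one instance of HMS
    conf.set("hive.metastore.housekeeping.threads.on", "true");
    conf.set("hive.metastore.task.threads.remote", "true");       // any one instance of HMS
    conf.set("hive.compactor.history.retention.succeeded", "1");
    conf.set("hive.compactor.history.retention.failed", "3");
    return conf;
  }

  public static void main(String[] args) {
    // Quick sanity check that the values are picked up.
    HiveConf conf = build();
    System.out.println(conf.get("hive.compactor.history.retention.succeeded"));
    System.out.println(conf.get("hive.compactor.history.retention.failed"));
  }
}
{code}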

 

> Hive isn't purging older compaction entries from show compaction command
> 
>
> Key: HIVE-25672
> URL: https://issues.apache.org/jira/browse/HIVE-25672
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Transactions
>Affects Versions: 3.1.1
>Reporter: Rohan Nimmagadda
>Priority: Minor
>
> Added the below properties in hive-site, but auto purging is not being enforced.
> When we run the show compactions command, it takes forever and returns hundreds
> of thousands of rows.
> Result of the show compactions command:
> {code:java}
> 752,450 rows selected (198.066 seconds) 
> {code}
> {code:java}
> "hive.compactor.history.retention.succeeded": "10",
> "hive.compactor.history.retention.failed": "10",  
> "hive.compactor.history.retention.attempted": "10",  
> "hive.compactor.history.reaper.interval": "10m" {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-25868) AcidHouseKeeperService fails to purgeCompactionHistory if the entries in COMPLETED_COMPACTIONS tables

2022-08-30 Thread Rohan Nimmagadda (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17598018#comment-17598018
 ] 

Rohan Nimmagadda edited comment on HIVE-25868 at 8/30/22 6:59 PM:
--

We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1 (Any one instance of HMS)
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true (Any one instance of HMS)
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3


was (Author: rohannimmagadda):
We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1 (Any one instance of HMS)
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true (Any one instance of HMS)
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3

> AcidHouseKeeperService fails to purgeCompactionHistory if the entries in 
> COMPLETED_COMPACTIONS tables 
> --
>
> Key: HIVE-25868
> URL: https://issues.apache.org/jira/browse/HIVE-25868
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Standalone Metastore
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> To purge the entries, a prepared statement is created. If the number of entries 
> in the prepared statement goes beyond the bind-parameter limit of the backend DB 
> (for Postgres it is around 32k), then the operation fails.
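
For illustration only, a minimal JDBC sketch of the chunking idea: splitting the IN 
list so that each prepared statement stays under the backend's bind-parameter limit. 
This is an assumed standalone example, not the actual metastore or HIVE-25659 code; 
the table/column names (COMPLETED_COMPACTIONS, CC_ID) follow the metastore schema, 
and MAX_PARAMS is an arbitrary illustrative value.
{code:java}
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class ChunkedCompactionHistoryPurge {

  // Keep each statement well under Postgres's ~32k bind-parameter limit.
  private static final int MAX_PARAMS = 1000;

  // Deletes the given COMPLETED_COMPACTIONS ids in chunks of at most MAX_PARAMS.
  public static void purge(Connection conn, List<Long> idsToDelete) throws SQLException {
    for (int start = 0; start < idsToDelete.size(); start += MAX_PARAMS) {
      int end = Math.min(start + MAX_PARAMS, idsToDelete.size());
      List<Long> chunk = idsToDelete.subList(start, end);

      // Build "DELETE ... WHERE CC_ID IN (?, ?, ...)" with one placeholder per id.
      StringBuilder sql = new StringBuilder("DELETE FROM COMPLETED_COMPACTIONS WHERE CC_ID IN (");
      for (int i = 0; i < chunk.size(); i++) {
        sql.append(i == 0 ? "?" : ", ?");
      }
      sql.append(")");

      try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
        for (int i = 0; i < chunk.size(); i++) {
          ps.setLong(i + 1, chunk.get(i));
        }
        ps.executeUpdate();
      }
    }
  }
}
{code}
In Hive itself the chunk size would presumably be governed by hive.direct.sql.max.parameters 
once a fix along the lines of HIVE-25659 is in place, which is why that property shows up in 
the config lists in the comments above.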



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-25868) AcidHouseKeeperService fails to purgeCompactionHistory if the entries in COMPLETED_COMPACTIONS tables

2022-08-30 Thread Rohan Nimmagadda (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17598018#comment-17598018
 ] 

Rohan Nimmagadda edited comment on HIVE-25868 at 8/30/22 6:58 PM:
--

We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3


was (Author: rohannimmagadda):
We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied [HIVE-25659|https://issues.apache.org/jira/browse/HIVE-25659] to the 
Hive 3.1 version and added the configs below to delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3

> AcidHouseKeeperService fails to purgeCompactionHistory if the entries in 
> COMPLETED_COMPACTIONS tables 
> --
>
> Key: HIVE-25868
> URL: https://issues.apache.org/jira/browse/HIVE-25868
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Standalone Metastore
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> To purge the entries, a prepared statement is created. If the number of entries 
> in the prepared statement goes beyond the bind-parameter limit of the backend DB 
> (for Postgres it is around 32k), then the operation fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (HIVE-25868) AcidHouseKeeperService fails to purgeCompactionHistory if the entries in COMPLETED_COMPACTIONS tables

2022-08-30 Thread Rohan Nimmagadda (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17598018#comment-17598018
 ] 

Rohan Nimmagadda edited comment on HIVE-25868 at 8/30/22 6:58 PM:
--

We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1 (Any one instance of HMS)
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true (Any one instance of HMS)
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3


was (Author: rohannimmagadda):
We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied HIVE-25659 to the Hive 3.1 version and added the configs below to 
delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3

> AcidHouseKeeperService fails to purgeCompactionHistory if the entries in 
> COMPLETED_COMPACTIONS tables 
> --
>
> Key: HIVE-25868
> URL: https://issues.apache.org/jira/browse/HIVE-25868
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Standalone Metastore
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> To purge the entries, a prepared statement is created. If the number of entries 
> in the prepared statement goes beyond the bind-parameter limit of the backend DB 
> (for Postgres it is around 32k), then the operation fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-25868) AcidHouseKeeperService fails to purgeCompactionHistory if the entries in COMPLETED_COMPACTIONS tables

2022-08-30 Thread Rohan Nimmagadda (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17598018#comment-17598018
 ] 

Rohan Nimmagadda commented on HIVE-25868:
-

We have more than 2M completed transactions in Hive. With the default Hive 
properties, the backend DB (Postgres) is not able to handle the delete query in 
such big chunks; it failed with the exception below.

{code:java}
An I/O error occurred while sending to the backend. (SQLState=08006, 
ErrorCode=0)
2022-08-25T15:06:00,256 ERROR [pool-6-thread-6]: 
txn.AcidCompactionHistoryService (AcidCompactionHistoryService.java:run(64)) - 
Serious error in pool-6-thread-6
org.apache.hadoop.hive.metastore.api.MetaException: Unable to connect to 
transaction database org.postgresql.util.PSQLException: An I/O error occurred 
while sending to the backend.


purgeCompactionHistory() : An I/O error occurred while sending to the backend 
{code}
So we applied [HIVE-25659|https://issues.apache.org/jira/browse/HIVE-25659] to the 
Hive 3.1 version and added the configs below to delete the older completed txns.

The configurations below should be documented:
 # hive.direct.sql.max.parameters=1
 # hive.metastore.housekeeping.threads.on=true
 # hive.metastore.task.threads.remote=true
 # hive.compactor.history.retention.succeeded=1
 # hive.compactor.history.retention.failed=3

> AcidHouseKeeperService fails to purgeCompactionHistory if the entries in 
> COMPLETED_COMPACTIONS tables 
> --
>
> Key: HIVE-25868
> URL: https://issues.apache.org/jira/browse/HIVE-25868
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Standalone Metastore
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>
> To purge the entries, a prepared statement is created. If the number of entries 
> in the prepared statement goes beyond the bind-parameter limit of the backend DB 
> (for Postgres it is around 32k), then the operation fails.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HIVE-25672) Hive isn't purging older compaction entries from show compaction command

2021-11-04 Thread Rohan Nimmagadda (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohan Nimmagadda updated HIVE-25672:

Description: 
Added the below properties in hive-site, but auto purging is not being enforced.

When we run the show compactions command, it takes forever and returns hundreds of 
thousands of rows.

Result of the show compactions command:
{code:java}
752,450 rows selected (198.066 seconds) 

{code}
{code:java}
"hive.compactor.history.retention.succeeded": "10",
"hive.compactor.history.retention.failed": "10",  
"hive.compactor.history.retention.attempted": "10",  
"hive.compactor.history.reaper.interval": "10m" {code}

  was:
Added the below properties in hive-site, but auto purging is not being enforced.

When we run the show compactions command, it takes forever and returns hundreds of 
thousands of rows.

Result of the show compactions command:
{code:java}
752,450 rows selected (198.066 seconds) {code}
{code:java}
"hive.compactor.history.retention.succeeded": "10",
"hive.compactor.history.retention.failed": "10",  
"hive.compactor.history.retention.attempted": "10",  
"hive.compactor.history.reaper.interval": "10m" {code}


> Hive isn't purging older compaction entries from show compaction command
> 
>
> Key: HIVE-25672
> URL: https://issues.apache.org/jira/browse/HIVE-25672
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Transactions
>Affects Versions: 3.1.1
>Reporter: Rohan Nimmagadda
>Priority: Minor
>
> Added the below properties in hive-site, but auto purging is not being enforced.
> When we run the show compactions command, it takes forever and returns hundreds
> of thousands of rows.
> Result of the show compactions command:
> {code:java}
> 752,450 rows selected (198.066 seconds) 
> {code}
> {code:java}
> "hive.compactor.history.retention.succeeded": "10",
> "hive.compactor.history.retention.failed": "10",  
> "hive.compactor.history.retention.attempted": "10",  
> "hive.compactor.history.reaper.interval": "10m" {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-25672) Hive isn't purging older compaction entries from show compaction command

2021-11-04 Thread Rohan Nimmagadda (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohan Nimmagadda updated HIVE-25672:

Description: 
Added the below properties in hive-site, but auto purging is not being enforced.

When we run the show compactions command, it takes forever and returns hundreds of 
thousands of rows.

Result of the show compactions command:
{code:java}
752,450 rows selected (198.066 seconds) {code}
{code:java}
"hive.compactor.history.retention.succeeded": "10",
"hive.compactor.history.retention.failed": "10",  
"hive.compactor.history.retention.attempted": "10",  
"hive.compactor.history.reaper.interval": "10m" {code}

  was:
Added the below properties in hive-site, but auto purging is not being enforced.

When we run the show compactions command, it takes forever and returns hundreds of 
thousands of rows.
{code:java}
"hive.compactor.history.retention.succeeded": "10",
"hive.compactor.history.retention.failed": "10",  
"hive.compactor.history.retention.attempted": "10",  
"hive.compactor.history.reaper.interval": "10m" {code}


> Hive isn't purging older compaction entries from show compaction command
> 
>
> Key: HIVE-25672
> URL: https://issues.apache.org/jira/browse/HIVE-25672
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, Metastore, Transactions
>Affects Versions: 3.1.1
>Reporter: Rohan Nimmagadda
>Priority: Minor
>
> Added the below properties in hive-site, but auto purging is not being enforced.
> When we run the show compactions command, it takes forever and returns hundreds
> of thousands of rows.
> Result of the show compactions command:
> {code:java}
> 752,450 rows selected (198.066 seconds) {code}
> {code:java}
> "hive.compactor.history.retention.succeeded": "10",
> "hive.compactor.history.retention.failed": "10",  
> "hive.compactor.history.retention.attempted": "10",  
> "hive.compactor.history.reaper.interval": "10m" {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)