[jira] [Commented] (HIVE-25535) Control cleaning obsolete directories/files of a table via property

2023-03-09 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17698381#comment-17698381
 ] 

Denys Kuzmenko commented on HIVE-25535:
---

[~ashish-kumar-sharma], why are you calling `txnHandler.markCleaned(ci)` if 
isNoCleanUpSet? In the case of aborts, that would remove the metadata, causing 
the aborted data to be considered valid.

> Control cleaning obsolete directories/files of a table via property
> ---
>
> Key: HIVE-25535
> URL: https://issues.apache.org/jira/browse/HIVE-25535
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0-alpha-1
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> *Use Case* - 
> External tools such as [SPARK_ACID|https://github.com/qubole/spark-acid] 
> access the Hive metastore directly instead of going through LLAP or HS2, and 
> therefore cannot acquire locks on metastore artifacts. As a result, if a 
> Spark-ACID job starts while a compaction is running in Hive, the job can fail 
> with a *FileNotFound* exception for a delta directory: the delta files exist 
> during the Spark-ACID compilation phase but are deleted by the compactor 
> before execution starts. 
> To tackle problems like this, I propose adding a "NO_CLEANUP" config to the 
> table properties and partition properties, giving finer control over the 
> table- and partition-level compaction cleanup process. 
> We already have 
> "[HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED|https://github.com/apache/hive/blob/71583e322fe14a0cfcde639629b509b252b0ed2c/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L3243]", 
> which allows us to delay the deletion of obsolete directories/files, but it 
> applies to every table in the metastore, whereas this config provides table- 
> and partition-level control. 
> *Solution* - 
> Add "NO_CLEANUP" to the table properties to enable/disable table-level and 
> partition-level cleanup and prevent the cleaner process from automatically 
> removing obsolete directories/files. 
> Example - 
> ALTER TABLE <table_name> SET TBLPROPERTIES('NO_CLEANUP'=FALSE/TRUE);
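As a concrete sketch, toggling the property per table could look like the following (the table name `orders` is a hypothetical placeholder, and the quoted string values are an assumption, since Hive table property values are strings):

```sql
-- Hypothetical table name; prevent the Cleaner from removing obsolete
-- delta directories/files for this table while external readers are active.
ALTER TABLE orders SET TBLPROPERTIES ('NO_CLEANUP' = 'TRUE');

-- Re-enable automatic cleanup once it is safe to do so.
ALTER TABLE orders SET TBLPROPERTIES ('NO_CLEANUP' = 'FALSE');
```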



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HIVE-25535) Control cleaning obsolete directories/files of a table via property

2021-09-17 Thread Ashish Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17417017#comment-17417017
 ] 

Ashish Sharma commented on HIVE-25535:
--

[~dkuzmenko] I agree with you. Now that the compactor runs in a transaction, 
problems like FileNotFound will no longer occur. This config is intended mainly 
for users on HDP 3.1 and earlier, where the lock-based Cleaner is still 
running. Backporting the transactional compactor is not straightforward, as it 
requires a metastore schema change. 






[jira] [Commented] (HIVE-25535) Control cleaning obsolete directories/files of a table via property

2021-09-17 Thread Denys Kuzmenko (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416899#comment-17416899
 ] 

Denys Kuzmenko commented on HIVE-25535:
---

The lock-based Cleaner implementation was required when compaction did not run 
in a transaction. That is no longer the case; however, HDP 3.1 still relies on 
the locks. 






[jira] [Commented] (HIVE-25535) Control cleaning obsolete directories/files of a table via property

2021-09-17 Thread Ashish Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-25535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416713#comment-17416713
 ] 

Ashish Sharma commented on HIVE-25535:
--

[~dkuzmenko] Updated the use case in the description.



