[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-12-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=691584&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-691584 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 07/Dec/21 09:17
Start Date: 07/Dec/21 09:17
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2825:
URL: https://github.com/apache/hive/pull/2825#discussion_r763787981



##
File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
##
@@ -1621,6 +1621,7 @@ public void mmTableOpenWriteId() throws Exception {
     verifyFooBarResult(tblName, 2);
     verifyHasBase(table.getSd(), fs, "base_005_v016");
     runCleaner(conf);
+    runCleaner(conf);

Review comment:
   added




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 691584)
Time Spent: 4h 20m  (was: 4h 10m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.
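For illustration only (a minimal, self-contained sketch with made-up write ids; this is not code from the patch): the failure mode in the description can be modeled as a queue whose first cleaning pass removes every obsolete delta it can see, leaving nothing for the second pass, whose entry is then thrown back indefinitely.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.TreeSet;

    public class ReadyForCleaningDemo {
      public static void main(String[] args) {
        // Obsolete deltas on disk, keyed by write id (all <= 3 after the
        // second compaction).
        TreeSet<Long> obsoleteDeltas = new TreeSet<>();
        obsoleteDeltas.add(1L);
        obsoleteDeltas.add(2L);
        obsoleteDeltas.add(3L);

        // Two queue entries for the same partition, both "ready for cleaning".
        Deque<Long> readyForCleaning = new ArrayDeque<>();
        readyForCleaning.add(2L); // first compaction, highest write id 2
        readyForCleaning.add(3L); // second compaction, highest write id 3

        while (!readyForCleaning.isEmpty()) {
          long entry = readyForCleaning.poll();
          // Without per-compaction filtering, the first pass deletes every
          // obsolete delta, not just those below its own highest write id.
          boolean removedAnything = obsoleteDeltas.removeIf(w -> w <= 3L);
          if (removedAnything) {
            System.out.println("entry " + entry + " removed obsolete files");
          } else {
            // Nothing left to delete: the entry is thrown back and accumulates.
            System.out.println("entry " + entry + " stuck in ready for cleaning");
            break; // the real queue would retry this entry indefinitely
          }
        }
      }
    }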



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-12-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=691576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-691576 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 07/Dec/21 09:04
Start Date: 07/Dec/21 09:04
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2825:
URL: https://github.com/apache/hive/pull/2825#discussion_r763777180



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -323,57 +323,62 @@ public void markCompacted(CompactionInfo info) throws MetaException {
   @Override
   @RetrySemantics.ReadOnly
   public List<CompactionInfo> findReadyToClean(long minOpenTxnWaterMark, long retentionTime) throws MetaException {
-    Connection dbConn = null;
-    List<CompactionInfo> rc = new ArrayList<>();
-
-    Statement stmt = null;
-    ResultSet rs = null;
     try {
-      try {
-        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
-        stmt = dbConn.createStatement();
+      List<CompactionInfo> rc = new ArrayList<>();
+
+      try (Connection dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+           Statement stmt = dbConn.createStatement()) {
         /*
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
          */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
+        String whereClause = " WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'";
         if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
+          whereClause += " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
         }
         if (retentionTime > 0) {
-          s = s + " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
+          whereClause += " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
         }
-        s = s + " ORDER BY \"CQ_HIGHEST_WRITE_ID\", \"CQ_ID\"";
+        String s = "SELECT \"CQ_ID\", \"cq1\".\"CQ_DATABASE\", \"cq1\".\"CQ_TABLE\", \"cq1\".\"CQ_PARTITION\"," +
+          "   \"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\", \"CQ_TBLPROPERTIES\"" +
+          "  FROM \"COMPACTION_QUEUE\" \"cq1\" " +
+          "INNER JOIN (" +
+          "  SELECT MIN(\"CQ_HIGHEST_WRITE_ID\") \"WRITE_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\"" +
+          "  FROM \"COMPACTION_QUEUE\""
+          + whereClause +
+          "  GROUP BY \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\") \"cq2\" " +
+          "ON \"cq1\".\"CQ_DATABASE\" = \"cq2\".\"CQ_DATABASE\"" +
+          "  AND \"cq1\".\"CQ_TABLE\" = \"cq2\".\"CQ_TABLE\"" +
+          "  AND (\"cq1\".\"CQ_PARTITION\" = \"cq2\".\"CQ_PARTITION\"" +
+          "    OR \"cq1\".\"CQ_PARTITION\" IS NULL AND \"cq2\".\"CQ_PARTITION\" IS NULL)"
+          + whereClause +
+          "  AND \"CQ_HIGHEST_WRITE_ID\" = \"WRITE_ID\"" +
+          "  ORDER BY \"CQ_ID\"";
         LOG.debug("Going to execute query <" + s + ">");
-        rs = stmt.executeQuery(s);
-
-        while (rs.next()) {
-          CompactionInfo info = new CompactionInfo();
-          info.id = rs.getLong(1);
-          info.dbname = rs.getString(2);
-          info.tableName = rs.getString(3);
-          info.partName = rs.getString(4);
-          info.type = dbCompactionType2ThriftType(rs.getString(5).charAt(0));
-          info.runAs = rs.getString(6);
-          info.highestWriteId = rs.getLong(7);
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Found ready to clean: " + info.toString());
+        try (ResultSet rs = stmt.executeQuery(s)) {
+          while (rs.next()) {
+            CompactionInfo info = new CompactionInfo();
+            info.id = rs.getLong(1);
+            info.dbname = rs.getString(2);
+            info.tableName = rs.getString(3);
+            info.partName = rs.getString(4);
+            info.type = dbCompactionType2ThriftType(rs.getString(5).charAt(0));
+            info.runAs = rs.getString(6);
+            info.highestWriteId = rs.getLong(7);
+            if (LOG.isDebugEnabled()) {
+              LOG.debug("Found ready to clean: " + info.toString());
            }
+            rc.add(info);
          }
-          rc.add(info);
        }
        return rc;
      } catch (SQLException e) {
        LOG.error("Unable to select next element for

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-12-07 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=691575&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-691575 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 07/Dec/21 09:04
Start Date: 07/Dec/21 09:04
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2825:
URL: https://github.com/apache/hive/pull/2825#discussion_r763776755



##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -323,57 +323,62 @@ public void markCompacted(CompactionInfo info) throws MetaException {
   @Override
   @RetrySemantics.ReadOnly
   public List<CompactionInfo> findReadyToClean(long minOpenTxnWaterMark, long retentionTime) throws MetaException {
-    Connection dbConn = null;
-    List<CompactionInfo> rc = new ArrayList<>();
-
-    Statement stmt = null;
-    ResultSet rs = null;
     try {
-      try {
-        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
-        stmt = dbConn.createStatement();
+      List<CompactionInfo> rc = new ArrayList<>();
+
+      try (Connection dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+           Statement stmt = dbConn.createStatement()) {
         /*
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
         */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
+        String whereClause = " WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'";
         if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
+          whereClause += " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
         }
         if (retentionTime > 0) {
-          s = s + " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
+          whereClause += " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
         }
-        s = s + " ORDER BY \"CQ_HIGHEST_WRITE_ID\", \"CQ_ID\"";
+        String s = "SELECT \"CQ_ID\", \"cq1\".\"CQ_DATABASE\", \"cq1\".\"CQ_TABLE\", \"cq1\".\"CQ_PARTITION\"," +
+          "   \"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\", \"CQ_TBLPROPERTIES\"" +
+          "  FROM \"COMPACTION_QUEUE\" \"cq1\" " +
+          "INNER JOIN (" +

Review comment:
   Because WRITE_ID is not unique, we can have the same write_id allocated for different combinations of tables where it's not the latest.
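A self-contained stand-in for the point above (hypothetical rows, plain Java instead of the metastore SQL), showing why MIN("CQ_HIGHEST_WRITE_ID") has to be taken per (database, table, partition) group rather than globally:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class MinWriteIdPerPartition {
      // Stand-in for a COMPACTION_QUEUE row (fields mirror the CQ_* columns).
      record Entry(String db, String table, String part, long highestWriteId) {}

      public static void main(String[] args) {
        // Hypothetical rows: write id 7 is the latest for t1 but only an
        // intermediate one for t2, so a single global MIN would be wrong.
        List<Entry> ready = Arrays.asList(
            new Entry("db1", "t1", null, 7),
            new Entry("db1", "t2", null, 7),
            new Entry("db1", "t2", null, 9));

        // Equivalent of MIN("CQ_HIGHEST_WRITE_ID") ... GROUP BY db, table, part.
        Map<String, Long> minPerGroup = ready.stream().collect(
            Collectors.groupingBy(e -> e.db() + "/" + e.table() + "/" + e.part(),
                Collectors.reducing(Long.MAX_VALUE, Entry::highestWriteId, Math::min)));

        // {db1/t1/null=7, db1/t2/null=7}: each partition's oldest ready entry
        // is cleaned first; db1.t2's entry with write id 9 stays queued.
        System.out.println(minPerGroup);
      }
    }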




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 691575)
Time Spent: 4h  (was: 3h 50m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-12-06 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=691177&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-691177 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 06/Dec/21 16:26
Start Date: 06/Dec/21 16:26
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2825:
URL: https://github.com/apache/hive/pull/2825#discussion_r763165482



##
File path: itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/txn/compactor/TestCompactor.java
##
@@ -1621,6 +1621,7 @@ public void mmTableOpenWriteId() throws Exception {
     verifyFooBarResult(tblName, 2);
     verifyHasBase(table.getSd(), fs, "base_005_v016");
     runCleaner(conf);
+    runCleaner(conf);

Review comment:
   Wherever the double cleaner runs, I think you should add a comment explaining why.
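One possible shape for such a comment, sketched as an assumption (this wording is not from the patch; it only restates the behavior HIVE-25115 introduces):

    runCleaner(conf);
    // HIVE-25115: the cleaner now handles only the oldest "ready for cleaning"
    // entry per table/partition in one pass, so a second pass is needed to
    // process the remaining queue entry and remove the rest of the obsolete
    // directories.
    runCleaner(conf);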

##
File path: standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -323,57 +323,62 @@ public void markCompacted(CompactionInfo info) throws MetaException {
   @Override
   @RetrySemantics.ReadOnly
   public List<CompactionInfo> findReadyToClean(long minOpenTxnWaterMark, long retentionTime) throws MetaException {
-    Connection dbConn = null;
-    List<CompactionInfo> rc = new ArrayList<>();
-
-    Statement stmt = null;
-    ResultSet rs = null;
     try {
-      try {
-        dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
-        stmt = dbConn.createStatement();
+      List<CompactionInfo> rc = new ArrayList<>();
+
+      try (Connection dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
+           Statement stmt = dbConn.createStatement()) {
         /*
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
         */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
+        String whereClause = " WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'";
         if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
+          whereClause += " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
         }
         if (retentionTime > 0) {
-          s = s + " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
+          whereClause += " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
         }
-        s = s + " ORDER BY \"CQ_HIGHEST_WRITE_ID\", \"CQ_ID\"";
+        String s = "SELECT \"CQ_ID\", \"cq1\".\"CQ_DATABASE\", \"cq1\".\"CQ_TABLE\", \"cq1\".\"CQ_PARTITION\"," +
+          "   \"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\", \"CQ_TBLPROPERTIES\"" +
+          "  FROM \"COMPACTION_QUEUE\" \"cq1\" " +
+          "INNER JOIN (" +
+          "  SELECT MIN(\"CQ_HIGHEST_WRITE_ID\") \"WRITE_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\"" +
+          "  FROM \"COMPACTION_QUEUE\""
+          + whereClause +
+          "  GROUP BY \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\") \"cq2\" " +
+          "ON \"cq1\".\"CQ_DATABASE\" = \"cq2\".\"CQ_DATABASE\"" +
+          "  AND \"cq1\".\"CQ_TABLE\" = \"cq2\".\"CQ_TABLE\"" +
+          "  AND (\"cq1\".\"CQ_PARTITION\" = \"cq2\".\"CQ_PARTITION\"" +
+          "    OR \"cq1\".\"CQ_PARTITION\" IS NULL AND \"cq2\".\"CQ_PARTITION\" IS NULL)"
+          + whereClause +
+          "  AND \"CQ_HIGHEST_WRITE_ID\" = \"WRITE_ID\"" +
+          "  ORDER BY \"CQ_ID\"";
         LOG.debug("Going to execute query <" + s + ">");
-        rs = stmt.executeQuery(s);
-
-        while (rs.next()) {
-          CompactionInfo info = new CompactionInfo();
-          info.id = rs.getLong(1);
-          info.dbname = rs.getString(2);
-          info.tableName = rs.getString(3);
-          info.partName = rs.getString(4);
-          info.type = dbCompactionType2ThriftType(rs.getString(5).charAt(0));
-          info.runAs = rs.getString(6);
-          info.highestWriteId = rs.getLong(7);
-          if (LOG.isDebugEnabled()) {
-            LOG.debug("Found ready to clean: " + info.toString());
+        try (ResultSet rs = stmt.executeQuery(s)) {
+          while (rs.next()) {
+            CompactionInfo info = new CompactionInfo();
+            info.id = rs.getLong(1);
+            info.dbname = rs.getString(2);
+            info.tableName = rs.getString(3);
+            info.partName = rs.getString(4);
+            info.type =

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-11-29 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=687595&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-687595 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 29/Nov/21 20:22
Start Date: 29/Nov/21 20:22
Worklog Time Spent: 10m 
  Work Description: deniskuzZ opened a new pull request #2825:
URL: https://github.com/apache/hive/pull/2825


   
   
   ### What changes were proposed in this pull request?
   
   
   
   ### Why are the changes needed?
   
   
   
   ### Does this PR introduce _any_ user-facing change?
   
   
   
   ### How was this patch tested?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 687595)
Time Spent: 3h 40m  (was: 3.5h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-11-23 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=685124&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-685124 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 23/Nov/21 08:35
Start Date: 23/Nov/21 08:35
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged pull request #2764:
URL: https://github.com/apache/hive/pull/2764


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 685124)
Time Spent: 3.5h  (was: 3h 20m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-08-02 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=632189&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-632189 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 02/Aug/21 07:43
Start Date: 02/Aug/21 07:43
Worklog Time Spent: 10m 
  Work Description: deniskuzZ merged pull request #2277:
URL: https://github.com/apache/hive/pull/2277


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 632189)
Time Spent: 3h 20m  (was: 3h 10m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-27 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=628265&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628265 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 08:45
Start Date: 27/Jul/21 08:45
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r677247101



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   sure, that could be done




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628265)
Time Spent: 3h 10m  (was: 3h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=627596&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627596 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 09:15
Start Date: 26/Jul/21 09:15
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r676430685



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   And we're 100% sure that we're lowering it and not raising it? Maybe we could include some sort of assertion that ci.highestWriteId <= previous high watermark?
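A sketch of the assertion being proposed, under the assumption that the names from the surrounding diff are in scope (getHighWatermark() is the existing accessor on Hive's ValidWriteIdList; this guard itself is not part of the patch):

    // Illustrative guard only: capping must never *raise* the watermark past
    // what the metastore already considers readable.
    long previousHighWatermark = validWriteIdList.getHighWatermark();
    if (ci.highestWriteId > previousHighWatermark) {
      throw new IllegalStateException("Cleaner would raise the high watermark from "
          + previousHighWatermark + " to " + ci.highestWriteId);
    }
    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);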




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627596)
Time Spent: 3h  (was: 2h 50m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-21 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625976 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/Jul/21 08:18
Start Date: 21/Jul/21 08:18
Worklog Time Spent: 10m 
  Work Description: klcopp closed pull request #2274:
URL: https://github.com/apache/hive/pull/2274


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 625976)
Time Spent: 2h 50m  (was: 2h 40m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625850&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625850 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/Jul/21 00:08
Start Date: 21/Jul/21 00:08
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] commented on pull request #2274:
URL: https://github.com/apache/hive/pull/2274#issuecomment-883785153


   This pull request has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
   Feel free to reach out on the d...@hive.apache.org list if the patch is in need of reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 625850)
Time Spent: 2h 40m  (was: 2.5h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625521&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625521 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 12:21
Start Date: 20/Jul/21 12:21
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672892362



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   How do we know that ci.highestWriteId's txn <= the min open txn the cleaner uses, if MIN_HISTORY_LEVEL is still used?

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Yes, ci.highestWriteId = the highest write id that was compacted.
   So if we have this after compaction:
   delta_1_1
   delta_2_2
   delta_3_3
   base_3
   ci.highestWriteId=3, so the cleaner will remove (assuming MIN_HISTORY_LEVEL is still being used):
   delta_1_1
   delta_2_2
   delta_3_3
   But how do we know those can be removed?
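For readers following along, a minimal, self-contained sketch of the watermark capping being discussed (the table name, write ids, and constructor arguments are made up for illustration; ValidReaderWriteIdList and updateHighWatermark are the classes used in the diff above):

    import java.util.BitSet;
    import org.apache.hadoop.hive.common.ValidReaderWriteIdList;

    public class HighWatermarkDemo {
      public static void main(String[] args) {
        // The metastore says write ids up to 10 are readable, but this
        // compaction only covered write ids 1..3 (ci.highestWriteId == 3).
        ValidReaderWriteIdList writeIds =
            new ValidReaderWriteIdList("db.t", new long[0], new BitSet(), 10);

        // Cap the watermark at 3: only delta_1_1, delta_2_2 and delta_3_3 are
        // treated as obsolete w.r.t. this compaction; anything made obsolete by
        // a newer compaction is left for that compaction's own cleaning run.
        writeIds = writeIds.updateHighWatermark(3);
        System.out.println(writeIds.getHighWatermark()); // prints 3
      }
    }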




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 625521)
Time Spent: 2.5h  (was: 2h 20m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625506&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625506 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 12:19
Start Date: 20/Jul/21 12:19
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records all open txns that have to be ignored.

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records the write id HWM and all open txns below it that have to be ignored.

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList =

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625135 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 10:10
Start Date: 20/Jul/21 10:10
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672892362



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   How do we know that ci.highestWriteId's txn <= the min open txn the 
cleaner uses, if MIN_HISTORY_LEVEL is still used?

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Yes, ci.highestWriteId = the highest write id that was compacted.
   So if we have this after compaction:
   delta_1_1
   delta_2_2
   delta_3_3
   base_3
   ci.highestWriteId=3, so the cleaner will remove (assuming MIN_HISTORY_LEVEL is still being used):
   delta_1_1
   delta_2_2
   delta_3_3
   But how do we know those can be removed?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 625135)
Time Spent: 2h 10m  (was: 2h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time 

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=625120&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-625120 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 10:08
Start Date: 20/Jul/21 10:08
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records all open txns that have to be ignored.

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records the write id HWM and all open txns below it that have to be ignored.

##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList =

[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=624810&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-624810 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 09:27
Start Date: 20/Jul/21 09:27
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672959780



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Yes, ci.highestWriteId = the highest write id that was compacted.
   So if we have this after compaction:
   delta_1_1
   delta_2_2
   delta_3_3
   base_3
   ci.highestWriteId=3, so the cleaner will remove (assuming MIN_HISTORY_LEVEL is still being used):
   delta_1_1
   delta_2_2
   delta_3_3
   But how do we know those can be removed?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 624810)
Time Spent: 1h 50m  (was: 1h 40m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=624792&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-624792 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 08:09
Start Date: 20/Jul/21 08:09
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records the write id HWM and all open txns below it that have to be ignored.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 624792)
Time Spent: 1h 40m  (was: 1.5h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: if two compactions run on the same table and enter the "ready for
> cleaning" state at the same time, only one cleaning will remove the obsolete
> files; the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=624791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-624791
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 08:06
Start Date: 20/Jul/21 08:06
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672900959



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   Not sure I got the question, but highestWriteId is recorded at the time the compaction txn starts, so it records all open txns that have to be ignored.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 624791)
Time Spent: 1.5h  (was: 1h 20m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-07-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=624788=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-624788
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 20/Jul/21 07:54
Start Date: 20/Jul/21 07:54
Worklog Time Spent: 10m 
  Work Description: klcopp commented on a change in pull request #2277:
URL: https://github.com/apache/hive/pull/2277#discussion_r672892362



##
File path: ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/Cleaner.java
##
@@ -282,15 +282,12 @@ private ValidReaderWriteIdList getValidCleanerWriteIdList(CompactionInfo ci, Tab
     assert rsp != null && rsp.getTblValidWriteIdsSize() == 1;
     ValidReaderWriteIdList validWriteIdList =
         TxnCommonUtils.createValidReaderWriteIdList(rsp.getTblValidWriteIds().get(0));
-    boolean delayedCleanupEnabled = conf.getBoolVar(HIVE_COMPACTOR_DELAYED_CLEANUP_ENABLED);
-    if (delayedCleanupEnabled) {
-      /*
-       * If delayed cleanup enabled, we need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
-       * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
-       * should not touch the newer obsolete directories to not to violate the retentionTime for those.
-       */
-      validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);
-    }
+    /*
+     * We need to filter the obsoletes dir list, to only remove directories that were made obsolete by this compaction
+     * If we have a higher retentionTime it is possible for a second compaction to run on the same partition. Cleaning up the first compaction
+     * should not touch the newer obsolete directories to not to violate the retentionTime for those.
+     */
+    validWriteIdList = validWriteIdList.updateHighWatermark(ci.highestWriteId);

Review comment:
   How do we know that ci.highestWriteId's txn <= the min open txn the cleaner uses, if MIN_HISTORY_LEVEL is still used?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 624788)
Time Spent: 1h 20m  (was: 1h 10m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Denys Kuzmenko
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=600543=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600543
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/May/21 17:47
Start Date: 21/May/21 17:47
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r637098785



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnUtils.java
##
@@ -383,6 +384,37 @@ public static String getFullTableName(String dbName, String tableName) {
     return ret;
   }
 
+  /**
+   * Executes simple update queries based on output from TxnUtils#buildQueryWithINClauseStrings.
+   * Example: desired queries are "delete from x where y in (1, 2, 3)" and "delete from x where y in (4, 5, 6)"
+   *
+   * @param updateQueries  List of: "delete from x where y in (?,?,?)", "delete from x where y in (?,?,?)"
+   * @param updateQueryCount Number of queries to execute, in the example: 2
+   * @param ids to prepare statement with. In the example: List containing: 1, 2, 3, 4, 5, 6
+   * @param dbConn database Connection
+   * @throws SQLException
+   */
+  public static int executeUpdateQueries(List<String> updateQueries, List<Integer> updateQueryCount, List<Long> ids,
+      Connection dbConn)
+      throws SQLException {
+    int totalCount = 0;
+    int updatedCount = 0;
+    for (int i = 0; i < updateQueries.size(); i++) {
+      String query = updateQueries.get(i);
+      long insertCount = updateQueryCount.get(i);
+      LOG.debug("Going to execute update <" + query + ">");

Review comment:
   `LOG.debug("Going to ... <{}>", query);`
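
   The suggestion is standard SLF4J parameterized logging. A minimal illustrative comparison, assuming an SLF4J logger named LOG as in the surrounding code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingAnchorsExample {
    private static final Logger LOG = LoggerFactory.getLogger(LoggingAnchorsExample.class);

    static void run(String query) {
        // Concatenation builds the message string even when DEBUG is disabled:
        LOG.debug("Going to execute update <" + query + ">");

        // Anchors ({}) defer formatting until the level check passes, so the
        // string is only assembled when DEBUG logging is actually enabled:
        LOG.debug("Going to execute update <{}>", query);
    }
}
```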




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600543)
Time Spent: 1h  (was: 50m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=600545=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600545
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/May/21 17:47
Start Date: 21/May/21 17:47
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r637098961



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnUtils.java
##
@@ -383,6 +384,37 @@ public static String getFullTableName(String dbName, String tableName) {
     return ret;
   }
 
+  /**
+   * Executes simple update queries based on output from TxnUtils#buildQueryWithINClauseStrings.
+   * Example: desired queries are "delete from x where y in (1, 2, 3)" and "delete from x where y in (4, 5, 6)"
+   *
+   * @param updateQueries  List of: "delete from x where y in (?,?,?)", "delete from x where y in (?,?,?)"
+   * @param updateQueryCount Number of queries to execute, in the example: 2
+   * @param ids to prepare statement with. In the example: List containing: 1, 2, 3, 4, 5, 6
+   * @param dbConn database Connection
+   * @throws SQLException
+   */
+  public static int executeUpdateQueries(List<String> updateQueries, List<Integer> updateQueryCount, List<Long> ids,
+      Connection dbConn)
+      throws SQLException {
+    int totalCount = 0;
+    int updatedCount = 0;
+    for (int i = 0; i < updateQueries.size(); i++) {
+      String query = updateQueries.get(i);
+      long insertCount = updateQueryCount.get(i);
+      LOG.debug("Going to execute update <" + query + ">");
+      PreparedStatement pStmt = dbConn.prepareStatement(query);
+      for (int j = 0; j < insertCount; j++) {
+        pStmt.setLong(j + 1, ids.get(totalCount + j));
+      }
+      totalCount += insertCount;
+      int count = pStmt.executeUpdate();
+      LOG.debug("Updated " + count + " records");

Review comment:
   Use anchors.
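
   i.e. the anchored form `LOG.debug("Updated {} records", count);`. For readers skimming the thread, a hypothetical usage sketch of executeUpdateQueries above, with values matching the Javadoc example; note the Javadoc and the loop body read differently, and this sketch follows the loop, treating each updateQueryCount element as the corresponding query's parameter count:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.List;

import org.apache.hadoop.hive.metastore.txn.TxnUtils;

public class ExecuteUpdateQueriesUsage {
    // Hypothetical caller mirroring the Javadoc example: two DELETEs, three ids each.
    static int deleteSixRows(Connection dbConn) throws SQLException {
        List<String> updateQueries = List.of(
            "delete from x where y in (?,?,?)",
            "delete from x where y in (?,?,?)");
        List<Integer> updateQueryCount = List.of(3, 3); // parameters per query
        List<Long> ids = List.of(1L, 2L, 3L, 4L, 5L, 6L);
        return TxnUtils.executeUpdateQueries(updateQueries, updateQueryCount, ids, dbConn);
    }
}
```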




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600545)
Time Spent: 1h 10m  (was: 1h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=600542=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600542
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/May/21 17:46
Start Date: 21/May/21 17:46
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r637098230



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -365,7 +365,29 @@ public void markCleaned(CompactionInfo info) throws MetaException {
     ResultSet rs = null;
     try {
       dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
-      String s = "INSERT INTO \"COMPLETED_COMPACTIONS\"(\"CC_ID\", \"CC_DATABASE\", "
+
+      // Get all of this partition's COMPACTION_QUEUE entries in "ready for cleaning" with smaller id.
+      // TODO eventually change this to CQ_NEXT_TXN_ID (it might be null for some entires)
+      String s = "SELECT \"CQ_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_DATABASE\"=? AND \"CQ_TABLE\"=? ";
+      if (info.partName != null) {
+        s += " AND \"CQ_PARTITION\" = ?";
+      }
+      s += " AND \"CQ_STATE\"='" + READY_FOR_CLEANING + "' AND \"CQ_ID\" <= " + info.id;
+      pStmt = dbConn.prepareStatement(s);
+      pStmt.setString(1, info.dbname);
+      pStmt.setString(2, info.tableName);
+      if (info.partName != null) {
+        pStmt.setString(3, info.partName);
+      }
+      LOG.debug("Going to execute query <" + s + "> for CQ_ID=" + info.id);

Review comment:
   `LOG.debug("Going to execute query <{}> for CQ_ID={}", s, info.id);`
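
   Beyond the logging nit, the hunk above is the heart of the fix: when a cleaning succeeds, every earlier "ready for cleaning" entry for the same partition is superseded and can be marked cleaned too. A standalone sketch of that selection logic, with hypothetical types rather than the actual metastore schema classes:

```java
import java.util.List;
import java.util.stream.Collectors;

public class SupersededCleaningSketch {
    // Hypothetical, simplified stand-in for a COMPACTION_QUEUE row.
    record QueueEntry(long cqId, String partition, char state) {}

    static final char READY_FOR_CLEANING = 'r'; // illustrative constant

    // When the cleaning for `current` succeeds, every "ready for cleaning"
    // entry on the same partition with an id no greater than current's is
    // superseded by it -- this is what the SELECT in the diff collects.
    static List<Long> supersededIds(List<QueueEntry> queue, QueueEntry current) {
        return queue.stream()
                .filter(e -> e.state() == READY_FOR_CLEANING)
                .filter(e -> e.partition().equals(current.partition()))
                .filter(e -> e.cqId() <= current.cqId())
                .map(QueueEntry::cqId)
                .collect(Collectors.toList());
    }
}
```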




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600542)
Time Spent: 50m  (was: 40m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=600541=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-600541
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 21/May/21 17:45
Start Date: 21/May/21 17:45
Worklog Time Spent: 10m 
  Work Description: belugabehr commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r637097800



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -305,16 +302,19 @@ public void markCompacted(CompactionInfo info) throws MetaException {
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
          */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
-        if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
-        }
+        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", \"CQ_TYPE\", \"CQ_RUN_AS\"," +
+            " \"CQ_HIGHEST_WRITE_ID\", \"CQ_NEXT_TXN_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_ID\" IN (" +
+            " SELECT DISTINCT \"CQ_ID\" FROM (" +
+            "   SELECT MAX(\"CQ_NEXT_TXN_ID\") \"CQ_NEXT_TXN_ID\", MAX(\"CQ_ID\") \"CQ_ID\" " +
+            " FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '" + READY_FOR_CLEANING + "'" +
+            " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
         if (retentionTime > 0) {
-          s = s + " AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
+          s += "  AND \"CQ_COMMIT_TIME\" < (" + getEpochFn(dbProduct) + " - " + retentionTime + ")";
+        }
+        s += "GROUP BY \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\" ) \"X\" )";
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Going to execute query <" + s + ">");

Review comment:
   `LOG.debug("Going to... <{}>", s);`
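
   One related note: with the `{}` anchor form, the explicit isDebugEnabled() guard seen in the hunk becomes redundant, because SLF4J performs the level check internally and formats lazily. A minimal sketch, assuming an SLF4J logger named LOG as in the Hive code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class DebugGuardExample {
    private static final Logger LOG = LoggerFactory.getLogger(DebugGuardExample.class);

    static void logQuery(String s) {
        // Guard is needed with concatenation, because "..." + s is built eagerly:
        if (LOG.isDebugEnabled()) {
            LOG.debug("Going to execute query <" + s + ">");
        }
        // With an anchor, SLF4J checks the level itself and formats lazily,
        // so the surrounding guard can be dropped:
        LOG.debug("Going to execute query <{}>", s);
    }
}
```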




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 600541)
Time Spent: 40m  (was: 0.5h)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=596581=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-596581
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 14/May/21 09:54
Start Date: 14/May/21 09:54
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r632418906



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -305,17 +302,21 @@ public void markCompacted(CompactionInfo info) throws MetaException {
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
          */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
-        if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
-        }
+        StringBuilder sb = new StringBuilder();

Review comment:
   I think with this approach you might end up not cleaning anything at all, as you'll always be looking at the latest compaction id, which might be blocked by incoming txns.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 596581)
Time Spent: 0.5h  (was: 20m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=596563=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-596563
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 14/May/21 09:15
Start Date: 14/May/21 09:15
Worklog Time Spent: 10m 
  Work Description: deniskuzZ commented on a change in pull request #2274:
URL: https://github.com/apache/hive/pull/2274#discussion_r632397219



##
File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/CompactionTxnHandler.java
##
@@ -305,17 +302,21 @@ public void markCompacted(CompactionInfo info) throws MetaException {
          * By filtering on minOpenTxnWaterMark, we will only cleanup after every transaction is committed, that could see
          * the uncompacted deltas. This way the cleaner can clean up everything that was made obsolete by this compaction.
          */
-        String s = "SELECT \"CQ_ID\", \"CQ_DATABASE\", \"CQ_TABLE\", \"CQ_PARTITION\", "
-            + "\"CQ_TYPE\", \"CQ_RUN_AS\", \"CQ_HIGHEST_WRITE_ID\" FROM \"COMPACTION_QUEUE\" WHERE \"CQ_STATE\" = '"
-            + READY_FOR_CLEANING + "'";
-        if (minOpenTxnWaterMark > 0) {
-          s = s + " AND (\"CQ_NEXT_TXN_ID\" <= " + minOpenTxnWaterMark + " OR \"CQ_NEXT_TXN_ID\" IS NULL)";
-        }
+        StringBuilder sb = new StringBuilder();

Review comment:
   It's really hard to read the query with builder constructs.
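
   One way the readability concern could be addressed, sketched purely as an illustration: a Java text block (JDK 15+, which may not match Hive's build baseline) keeps the SQL legible as a unit. The query below is a simplified stand-in, not the patch's actual statement:

```java
public class ReadableSqlExample {
    static String readyToCleanQuery(char readyForCleaning, long minOpenTxnWaterMark) {
        // Text block keeps the SQL readable as a single visual unit.
        return """
            SELECT "CQ_ID", "CQ_DATABASE", "CQ_TABLE", "CQ_PARTITION",
                   "CQ_TYPE", "CQ_RUN_AS", "CQ_HIGHEST_WRITE_ID"
            FROM "COMPACTION_QUEUE"
            WHERE "CQ_STATE" = '%s'
              AND ("CQ_NEXT_TXN_ID" <= %d OR "CQ_NEXT_TXN_ID" IS NULL)
            """.formatted(readyForCleaning, minOpenTxnWaterMark);
    }
}
```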




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 596563)
Time Spent: 20m  (was: 10m)

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25115) Compaction queue entries may accumulate in "ready for cleaning" state

2021-05-14 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25115?focusedWorklogId=596547=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-596547
 ]

ASF GitHub Bot logged work on HIVE-25115:
-

Author: ASF GitHub Bot
Created on: 14/May/21 08:23
Start Date: 14/May/21 08:23
Worklog Time Spent: 10m 
  Work Description: klcopp opened a new pull request #2274:
URL: https://github.com/apache/hive/pull/2274


   See HIVE-25115
   
   ### How was this patch tested?
   Unit test


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 596547)
Remaining Estimate: 0h
Time Spent: 10m

> Compaction queue entries may accumulate in "ready for cleaning" state
> -
>
> Key: HIVE-25115
> URL: https://issues.apache.org/jira/browse/HIVE-25115
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If the Cleaner does not delete any files, the compaction queue entry is 
> thrown back to the queue and remains in "ready for cleaning" state.
> Problem: If 2 compactions run on the same table and enter "ready for 
> cleaning" state at the same time, only one "cleaning" will remove obsolete 
> files, the other entry will remain in the queue in "ready for cleaning" state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)