[ 
https://issues.apache.org/jira/browse/HIVE-26107?focusedWorklogId=764148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-764148
 ]

ASF GitHub Bot logged work on HIVE-26107:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 29/Apr/22 11:03
            Start Date: 29/Apr/22 11:03
    Worklog Time Spent: 10m 
      Work Description: deniskuzZ commented on code in PR #3172:
URL: https://github.com/apache/hive/pull/3172#discussion_r861702245


##########
ql/src/java/org/apache/hadoop/hive/ql/DriverUtils.java:
##########
@@ -46,26 +46,46 @@ private DriverUtils() {
     throw new UnsupportedOperationException("DriverUtils should not be instantiated!");
   }
 
-  public static void runOnDriver(HiveConf conf, String user, SessionState sessionState,
+  @FunctionalInterface
+  private interface DriverCreator {
+    Driver createDriver(QueryState qs);
+  }
+
+  public static void runOnDriver(HiveConf conf, SessionState sessionState,
       String query) throws HiveException {
-    runOnDriver(conf, user, sessionState, query, null, -1);
+    runOnDriver(conf, sessionState, query, null, -1);
   }
 
   /**
    * For Query Based compaction to run the query to generate the compacted data.
    */
-  public static void runOnDriver(HiveConf conf, String user,
+  public static void runOnDriver(HiveConf conf,
       SessionState sessionState, String query, ValidWriteIdList writeIds, long compactorTxnId)
       throws HiveException {
     if(writeIds != null && compactorTxnId < 0) {
       throw new IllegalArgumentException(JavaUtils.txnIdToString(compactorTxnId) +
           " is not valid. Context: " + query);
     }
+    runOnDriverInternal(query, conf, sessionState, (qs) -> new Driver(qs, writeIds, compactorTxnId));
+  }
+
+  /**
+   * For Query Based compaction to run the query to generate the compacted data.
+   */
+  public static void runOnDriver(HiveConf conf, SessionState sessionState, String query, long analyzeTableWriteId)
+      throws HiveException {
+    if(analyzeTableWriteId < 0) {
+      throw new IllegalArgumentException(JavaUtils.txnIdToString(analyzeTableWriteId) +
+          " is not valid. Context: " + query);
+    }
+    runOnDriverInternal(query, conf, sessionState, (qs) -> new Driver(qs, analyzeTableWriteId));
+  }
+
+  private static void runOnDriverInternal(String query, HiveConf conf, SessionState sessionState, DriverCreator creator) throws HiveException {
     SessionState.setCurrentSessionState(sessionState);
     boolean isOk = false;
     try {
-      QueryState qs = new QueryState.Builder().withHiveConf(conf).withGenerateNewQueryId(true).nonIsolated().build();
-      Driver driver = new Driver(qs, null, null, writeIds, compactorTxnId);
+      Driver driver = creator.createDriver(new QueryState.Builder().withHiveConf(conf).withGenerateNewQueryId(true).nonIsolated().build());

Review Comment:
   nit: could we format it, maybe move the query state onto a new line?
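
   One way to read the nit, as an illustrative sketch only (the names come from the diff above; the layout itself is an assumption, not text committed in the PR): pull the QueryState construction onto its own lines so the Driver creation stays short.

       // Sketch of the suggested formatting; behavior is unchanged.
       QueryState queryState = new QueryState.Builder()
           .withHiveConf(conf)
           .withGenerateNewQueryId(true)
           .nonIsolated()
           .build();
       Driver driver = creator.createDriver(queryState);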





Issue Time Tracking
-------------------

    Worklog Id:     (was: 764148)
    Time Spent: 3h  (was: 2h 50m)

> Worker shouldn't inject duplicate entries in `ready for cleaning` state into 
> the compaction queue
> -------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-26107
>                 URL: https://issues.apache.org/jira/browse/HIVE-26107
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: László Végh
>            Assignee: László Végh
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> How to reproduce:
> 1) create an acid table and load some data;
> 2) manually trigger the compaction for the table several times;
> 3) inspect compaction_queue: There are multiple entries in 'ready for
> cleaning' state for the same table.
>  
> Expected behavior: All compaction requests after the first one should be
> rejected until the table is changed again.
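
A rough sketch of those reproduction steps over the Hive JDBC driver (the connection URL, table name, and timing are assumptions for illustration, not part of the ticket):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CompactionQueueRepro {
      public static void main(String[] args) throws Exception {
        // Assumed local HiveServer2; adjust the JDBC URL for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
             Statement stmt = conn.createStatement()) {
          // 1) create an acid table and load some data
          stmt.execute("CREATE TABLE repro_acid (id INT) STORED AS ORC "
              + "TBLPROPERTIES ('transactional'='true')");
          stmt.execute("INSERT INTO repro_acid VALUES (1), (2), (3)");
          // 2) manually trigger the compaction several times
          //    (depending on timing, you may need to wait between triggers)
          for (int i = 0; i < 3; i++) {
            stmt.execute("ALTER TABLE repro_acid COMPACT 'major'");
          }
          // 3) inspect the compaction queue; the bug shows up as multiple
          //    'ready for cleaning' rows for the same table
          try (ResultSet rs = stmt.executeQuery("SHOW COMPACTIONS")) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
              StringBuilder row = new StringBuilder();
              for (int c = 1; c <= cols; c++) {
                row.append(rs.getString(c)).append('\t');
              }
              System.out.println(row);
            }
          }
        }
      }
    }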



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
