[ https://issues.apache.org/jira/browse/HIVE-13353?focusedWorklogId=819580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-819580 ]

ASF GitHub Bot logged work on HIVE-13353:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 24/Oct/22 10:17
            Start Date: 24/Oct/22 10:17
    Worklog Time Spent: 10m 
      Work Description: veghlaci05 commented on code in PR #3608:
URL: https://github.com/apache/hive/pull/3608#discussion_r1003113137


##########
ql/src/java/org/apache/hadoop/hive/ql/ddl/process/show/compactions/ShowCompactionsAnalyzer.java:
##########
@@ -40,20 +43,53 @@ public ShowCompactionsAnalyzer(QueryState queryState) throws SemanticException {
 
   @Override
   public void analyzeInternal(ASTNode root) throws SemanticException {
+    ctx.setResFile(ctx.getLocalTmpPath());
     String poolName = null;
-    Tree pool = root.getChild(0);
-    if (pool != null) {
-      if (pool.getType() != HiveParser.TOK_COMPACT_POOL) {
-        throw new SemanticException("Unknown token, 'POOL' expected.");
-      } else {
-        poolName = unescapeSQLString(pool.getChild(0).getText());
+    String dbName = null;
+    String tbName = null;
+    String compactionType = null;
+    String compactionStatus = null;
+    long compactionId = 0;
+    Map<String, String> partitionSpec = null;
+    if (root.getChildCount() > 6) {
+      throw new SemanticException(ErrorMsg.INVALID_AST_TREE.getMsg(root.toStringTree()));
+    }
+    if (root.getType() == HiveParser.TOK_SHOW_COMPACTIONS) {
+      for (int i = 0; i < root.getChildCount(); i++) {
+        ASTNode child = (ASTNode) root.getChild(i);
+        switch (child.getType()) {
+          case HiveParser.TOK_TABTYPE:
+            tbName = child.getChild(0).getText();
+            if (child.getChildCount() == 2) {
+              if (child.getChild(0).getChildCount() == 2) {
+                dbName = DDLUtils.getFQName((ASTNode) child.getChild(0).getChild(0));
+                tbName = DDLUtils.getFQName((ASTNode) child.getChild(0).getChild(1));
+              }
+              ASTNode partitionSpecNode = (ASTNode) child.getChild(1);
+              partitionSpec = getValidatedPartSpec(getTable(dbName, tbName, true), partitionSpecNode, conf, false);
+            }
+            break;
+          case HiveParser.TOK_COMPACT_POOL:
+            poolName = unescapeSQLString(child.getChild(0).getText());
+            break;
+          case HiveParser.TOK_COMPACTION_TYPE:
+            compactionType = unescapeSQLString(child.getChild(0).getText());
+            break;
+          case HiveParser.TOK_COMPACTION_STATUS:
+            compactionStatus = unescapeSQLString(child.getChild(0).getText());
+            break;
+          case HiveParser.TOK_COMPACT_ID:
+            compactionId = Long.parseLong(child.getChild(0).getText());
+            break;
+          default:
+            dbName = child.getText();

Review Comment:
   The `case HiveParser.TOK_TABTYPE` branch should already handle both simple and 
DB-prefixed table names, so is this extra handling really required?
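To illustrate the point in the comment above: a hypothetical simplification (names and helper are illustrative, not Hive's actual `ASTNode` API) where a single code path accepts both a bare `tbl` and a qualified `db.tbl` name, defaulting the database when no prefix is present:

```java
// Hypothetical sketch of "one branch handles both forms"; the class name,
// method, and default-db convention are assumptions for illustration only.
public final class QualifiedName {
    private QualifiedName() {}

    /** Splits "db.tbl" into {db, tbl}; a bare "tbl" gets the default db. */
    public static String[] split(String name, String defaultDb) {
        int dot = name.indexOf('.');
        if (dot < 0) {
            return new String[] { defaultDb, name };
        }
        return new String[] { name.substring(0, dot), name.substring(dot + 1) };
    }
}
```

With such a helper, both forms fall through the same switch case instead of needing a separate `default:` branch for the database name.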



##########
standalone-metastore/metastore-common/src/main/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java:
##########
@@ -4362,4 +4353,6 @@ ReplicationMetricList getReplicationMetrics(GetReplicationMetricsRequest
    * @throws TException
    */
   List<WriteEventInfo> getAllWriteEventInfo(GetAllWriteEventInfoRequest request) throws TException;
+  ShowCompactResponse showCompactions(ShowCompactRequest request) throws TException;

Review Comment:
   I think this declaration should sit next to the existing `ShowCompactResponse 
showCompactions() throws TException;` at line 3501.



##########
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java:
##########
@@ -3916,56 +3879,59 @@ public ShowCompactResponse showCompact(ShowCompactRequest rqst) throws MetaExcep
       try (Connection dbConn = getDbConn(Connection.TRANSACTION_READ_COMMITTED);
         PreparedStatement stmt = sqlGenerator.prepareStmtWithParameters(dbConn, query.toString(),
           getShowCompactionQueryParamList(rqst))) {
-          LOG.debug("Going to execute query <" + query + ">");
-          try (ResultSet rs = stmt.executeQuery()) {
-            while (rs.next()) {
-              ShowCompactResponseElement e = new ShowCompactResponseElement();
-              e.setDbname(rs.getString(1));
-              e.setTablename(rs.getString(2));
-              e.setPartitionname(rs.getString(3));
-              e.setState(compactorStateToResponse(rs.getString(4).charAt(0)));
-              try {
-                e.setType(dbCompactionType2ThriftType(rs.getString(5).charAt(0)));
-              } catch (MetaException ex) {
-                //do nothing to handle RU/D if we add another status
-              }
-              e.setWorkerid(rs.getString(6));
-              long start = rs.getLong(7);
-              if (!rs.wasNull()) {
-                e.setStart(start);
-              }
-              long endTime = rs.getLong(8);
-              if (endTime != -1) {
-                e.setEndTime(endTime);
-              }
-              e.setRunAs(rs.getString(9));
-              e.setHadoopJobId(rs.getString(10));
-              e.setId(rs.getLong(11));
-              e.setErrorMessage(rs.getString(12));
-              long enqueueTime = rs.getLong(13);
-              if (!rs.wasNull()) {
-                e.setEnqueueTime(enqueueTime);
-              }
-              e.setWorkerVersion(rs.getString(14));
-              e.setInitiatorId(rs.getString(15));
-              e.setInitiatorVersion(rs.getString(16));
-              long cleanerStart = rs.getLong(17);
-              if (!rs.wasNull() && (cleanerStart != -1)) {
-                e.setCleanerStart(cleanerStart);
-              }
-              String poolName = rs.getString(18);
-              if (isBlank(poolName)) {
-                e.setPoolName(DEFAULT_POOL_NAME);
-              } else {
-                e.setPoolName(poolName);
-              }
-              e.setTxnId(rs.getLong(19));
-              e.setNextTxnId(rs.getLong(20));
-              e.setCommitTime(rs.getLong(21));
-              e.setHightestTxnId(rs.getLong(22));
-              response.addToCompacts(e);
+        if (rqst.isSetId()) {
+          stmt.setLong(getShowCompactionQueryParamList(rqst).size() + 1, rqst.getId());

Review Comment:
   Maybe introduce a QueryBuilder, like the one used for QueryBasedCompaction? All the 
showCompact-related query parts, `getShowCompactionQueryParamList(rqst)` and 
`getShowCompactionFilterClause(rqst)`, could then be extracted from TxnHandler. 
However, I'm not sure it's worth it. At the very least, the result of 
`getShowCompactionFilterClause(rqst)` should be stored in a variable to avoid 
calling it twice.
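A minimal sketch of the QueryBuilder idea from the comment above. The class name, column names, and method names are assumptions for illustration; the point is that each filter clause is built once and its bind parameter is stored alongside it, so nothing is computed twice:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical builder: accumulates "col = ?" predicates and their bind
// values together, so the SQL text and the parameter list always agree.
public final class ShowCompactionsQueryBuilder {
    private final StringBuilder sql = new StringBuilder(
        "SELECT \"CQ_DATABASE\", \"CQ_TABLE\" FROM \"COMPACTION_QUEUE\"");
    private final List<Object> params = new ArrayList<>();

    /** Adds one equality predicate when the value is present; skips nulls. */
    public ShowCompactionsQueryBuilder filter(String column, Object value) {
        if (value != null) {
            sql.append(params.isEmpty() ? " WHERE " : " AND ")
               .append('"').append(column).append("\" = ?");
            params.add(value);
        }
        return this;
    }

    public String sql() { return sql.toString(); }

    public List<Object> params() { return params; }
}
```

Usage would be something like `new ShowCompactionsQueryBuilder().filter("CQ_DATABASE", dbName).filter("CQ_TABLE", tbName)`, after which `params()` drives the `PreparedStatement` binding, with no second call to a filter-clause method.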



##########
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnStore.java:
##########
@@ -104,6 +104,33 @@ enum MUTEX_KEY {
   String DID_NOT_INITIATE_RESPONSE = "did not initiate";
   String REFUSED_RESPONSE = "refused";
 
+  static final char INITIATED_STATE = 'i';
+  static final char WORKING_STATE = 'w';
+  static final char READY_FOR_CLEANING = 'r';
+  static final char FAILED_STATE = 'f';
+  static final char SUCCEEDED_STATE = 's';
+  static final char DID_NOT_INITIATE = 'a';
+  static final char REFUSED_STATE = 'c';
+
+  // Compactor types
+  static final char MAJOR_TYPE = 'a';
+  static final char MINOR_TYPE = 'i';
+
+
+  static final String COMPACTOR_MAJOR_TYPE = "MAJOR";
+  static final String COMPACTOR_MINOR_TYPE = "MINOR";
+
+  static final String TXN_TMP_STATE = "_";
+
+  static final String DEFAULT_POOL_NAME = "default";
+
+
+  // Lock states
+  static final char LOCK_ACQUIRED = 'a';
+  static final  char LOCK_WAITING = 'w';
+
+  static final int ALLOWED_REPEATED_DEADLOCKS = 10;

Review Comment:
   The `static final` modifiers are unnecessary; fields declared in an interface 
are implicitly `public static final`.
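To illustrate the comment above (interface and class names here are made up for the demo): both declarations below compile to identical constants, because the Java Language Specification makes every interface field implicitly `public static final`.

```java
// Hypothetical mini-version of the TxnStore constants discussed above.
interface States {
    char INITIATED_STATE = 'i';              // implicitly public static final
    static final char WORKING_STATE = 'w';   // same constant, redundant modifiers
}

public final class StatesDemo {
    public static char initiated() { return States.INITIATED_STATE; }

    public static char working() { return States.WORKING_STATE; }
}
```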





Issue Time Tracking
-------------------

    Worklog Id:     (was: 819580)
    Time Spent: 10h 50m  (was: 10h 40m)

> SHOW COMPACTIONS should support filtering options
> -------------------------------------------------
>
>                 Key: HIVE-13353
>                 URL: https://issues.apache.org/jira/browse/HIVE-13353
>             Project: Hive
>          Issue Type: Improvement
>          Components: Transactions
>    Affects Versions: 1.3.0, 4.0.0
>            Reporter: Eugene Koifman
>            Assignee: KIRTI RUGE
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-13353.01.patch
>
>          Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Since SHOW COMPACTIONS now includes historical information, the output can 
> easily become unwieldy (e.g. 1000 partitions with 3 lines of history each).
> This is a significant usability issue.
> We need to add the ability to filter by db/table/partition.
> It would perhaps also be useful to filter by status.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)