pvary commented on a change in pull request #1095:
URL: https://github.com/apache/hive/pull/1095#discussion_r442738994



##########
File path: ql/src/java/org/apache/hadoop/hive/ql/ValidTxnManager.java
##########
@@ -183,8 +183,21 @@ ValidTxnWriteIdList recordValidWriteIds() throws LockException {
   }
 
   private ValidTxnWriteIdList getTxnWriteIds(String txnString) throws LockException {
-    List<String> txnTables = getTransactionalTables(getTables(true, true));
-    ValidTxnWriteIdList txnWriteIds = null;
+
+  List<String> txnTables = getTransactionalTables(getTables(true, true));
+  ValidTxnWriteIdList txnWriteIds = null;
+
+   // If we have collected all required table writeid (in SemanticAnalyzer), skip fetch again
+   if (driverContext.getConf().get(ValidTxnWriteIdList.VALID_TABLES_WRITEIDS_KEY) != null) {
+      txnWriteIds = new ValidTxnWriteIdList(driverContext.getConf().get(ValidTxnWriteIdList.VALID_TABLES_WRITEIDS_KEY));
+      for (String txnTable : txnTables) {
+        if (txnWriteIds.getTableValidWriteIdList(txnTable) == null) {
+          txnWriteIds = null;
+          break;
+        }
+      }
+    }

Review comment:
       If we keep it in this place, I would move it inside the else branch for getCompactionWriteIds, since we do not have DDLs in compaction anyway...
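
       For context, the restructuring suggested above would look roughly like the sketch below:
       the "reuse the writeIds already collected in SemanticAnalyzer" shortcut only matters on the
       non-compaction path, so it can sit inside the else branch. Only the names visible in the diff
       and in the comment are taken from the PR; the shape of the compaction branch and the fallback
       fetch from the transaction manager are assumptions for illustration, not the actual
       ValidTxnManager code.

       // Hedged sketch of the suggested structure; getCompactionWriteIds, getTxnManager and
       // getValidWriteIds are assumed names, everything else comes from the diff above.
       private ValidTxnWriteIdList getTxnWriteIds(String txnString) throws LockException {
         List<String> txnTables = getTransactionalTables(getTables(true, true));
         ValidTxnWriteIdList txnWriteIds = null;

         if (driverContext.getCompactionWriteIds() != null) {
           // Compaction path: no DDL can run during compaction, so the writeIds prepared for
           // the compaction query are used as-is and the SemanticAnalyzer shortcut is not needed.
           txnWriteIds = driverContext.getCompactionWriteIds();
         } else {
           // Non-compaction path: reuse the writeIds collected in SemanticAnalyzer,
           // but only if they cover every transactional table of the query.
           String cached = driverContext.getConf().get(ValidTxnWriteIdList.VALID_TABLES_WRITEIDS_KEY);
           if (cached != null) {
             txnWriteIds = new ValidTxnWriteIdList(cached);
             for (String txnTable : txnTables) {
               if (txnWriteIds.getTableValidWriteIdList(txnTable) == null) {
                 txnWriteIds = null; // cached list is incomplete, fall through to refetch
                 break;
               }
             }
           }
           if (txnWriteIds == null) {
             // Assumed fallback: fetch fresh writeIds from the transaction manager.
             txnWriteIds = driverContext.getTxnManager().getValidWriteIds(txnTables, txnString);
           }
         }
         return txnWriteIds;
       }

       Placed this way, the compaction path never even reads the conf entry, which also matches the
       reviewer's point that no DDL can invalidate the writeIds during compaction.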



