[ https://issues.apache.org/jira/browse/HIVE-23956?focusedWorklogId=465618&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-465618 ]
ASF GitHub Bot logged work on HIVE-23956:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 03/Aug/20 09:54
            Start Date: 03/Aug/20 09:54
    Worklog Time Spent: 10m

Work Description: pvary commented on a change in pull request #1339:
URL: https://github.com/apache/hive/pull/1339#discussion_r464312615

##########
File path: ql/src/test/org/apache/hadoop/hive/ql/TestTxnCommands.java
##########
@@ -618,7 +618,13 @@ public void testMultipleInserts() throws Exception {
     dumpTableData(Table.ACIDTBL, 1, 1);
     List<String> rs1 = runStatementOnDriver("select a,b from " + Table.ACIDTBL + " order by a,b");
     Assert.assertEquals("Content didn't match after commit rs1", allData, rs1);
+    runStatementOnDriver("delete from " + Table.ACIDTBL + " where b = 2");

Review comment:
   This is a valid test, but I think testMultipleInserts tests inserts, and this is a test for deletes. Maybe create its own test method named testDeleteOfInserts, like testUpdateOfInserts?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
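The suggested standalone testDeleteOfInserts would live in TestTxnCommands and reuse its runStatementOnDriver helper. As a rough, self-contained sketch of the test's shape (insert rows, delete a subset, verify the remainder), the in-memory stand-in below is purely illustrative; the class and method names here are assumptions, not Hive code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch of the suggested testDeleteOfInserts flow. A tiny
// in-memory table stands in for runStatementOnDriver against Table.ACIDTBL,
// so the insert -> delete -> verify sequence can be shown self-contained.
public class TestDeleteOfInsertsSketch {
    static final List<int[]> table = new ArrayList<>();

    // Stand-in for: insert into ACIDTBL(a,b) values(...)
    static void insert(int a, int b) { table.add(new int[]{a, b}); }

    // Stand-in for: delete from ACIDTBL where b = <b>
    static void deleteWhereB(int b) { table.removeIf(r -> r[1] == b); }

    // Stand-in for: select a,b from ACIDTBL order by a,b
    static List<String> selectOrdered() {
        List<String> out = new ArrayList<>();
        table.stream()
             .sorted(Comparator.comparingInt((int[] r) -> r[0]))
             .forEach(r -> out.add(r[0] + "\t" + r[1]));
        return out;
    }

    public static void main(String[] args) {
        insert(1, 2);
        insert(3, 4);
        deleteWhereB(2);                      // mirrors the delete under review
        System.out.println(selectOrdered()); // only rows with b != 2 remain
    }
}
```

In the real test method, the final step would be an `Assert.assertEquals` on the result of `runStatementOnDriver("select a,b from " + Table.ACIDTBL + " order by a,b")`, matching the style of testUpdateOfInserts.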
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Issue Time Tracking
-------------------

    Worklog Id:     (was: 465618)
    Time Spent: 3h 50m  (was: 3h 40m)

> Delete delta directory file information should be pushed to execution side
> --------------------------------------------------------------------------
>
>                 Key: HIVE-23956
>                 URL: https://issues.apache.org/jira/browse/HIVE-23956
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Peter Varga
>            Assignee: Peter Varga
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Since HIVE-23840 the LLAP cache is used to retrieve the tail of the ORC
> bucket files in the delete deltas, but to use the cache the fileId must be
> determined, so one more FileSystem call is issued for each bucket.
> This fileId is already available during compilation in the AcidState
> calculation; we should serialize it to the OrcSplit and remove the
> unnecessary FS calls.
> Furthermore, instead of sending the SyntheticFileId directly, we should pass
> the attemptId instead of the standard path hash. This way the path and the
> SyntheticFileId can be calculated, and it will keep working even if move-free
> delete operations are introduced.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
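The core idea in the description is that a synthetic file id computed from values already known at compile time can be rebuilt on the execution side with no extra FileSystem call. The sketch below is a minimal illustration of that idea only: the class name, the way fields are packed into a long, and the "bucket_NNNNN" path derivation from an attemptId are all assumptions for the example, not Hive's actual SyntheticFileId implementation.

```java
// Hypothetical sketch: a synthetic file id derived from values serialized in
// the split, so the executor can rebuild it without touching the FileSystem.
// Field layout and naming are illustrative, not Hive's real implementation.
public class SyntheticFileIdSketch {

    /** Pack the identifying values into a single long (stand-in layout). */
    static long syntheticId(long pathHash, long modTime, long length) {
        long h = pathHash;
        h = h * 31L + modTime;
        h = h * 31L + length;
        return h;
    }

    /**
     * If the delete-delta bucket file name is deterministic given the
     * attemptId (assumed "bucket_NNNNN" here), the path -- and hence its
     * hash -- can be recomputed on the execution side from the attemptId,
     * modification time, and length carried in the split.
     */
    static long idFromAttempt(String deltaDir, int attemptId,
                              long modTime, long length) {
        String path = deltaDir + "/bucket_" + String.format("%05d", attemptId);
        return syntheticId(path.hashCode(), modTime, length);
    }

    public static void main(String[] args) {
        // Compile side computes the id during AcidState calculation...
        long compileSide =
            idFromAttempt("/warehouse/t/delete_delta_5_5", 0, 1000L, 2048L);
        // ...execution side rebuilds the identical id from the serialized
        // attemptId, avoiding the per-bucket FileSystem call.
        long execSide =
            idFromAttempt("/warehouse/t/delete_delta_5_5", 0, 1000L, 2048L);
        System.out.println(compileSide == execSide);
    }
}
```

Deriving the id from the attemptId rather than a precomputed path hash is what would keep this working if file paths change under a future move-free delete scheme, since the path is reconstructed rather than captured.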