[ https://issues.apache.org/jira/browse/HIVE-26102?focusedWorklogId=753277&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-753277 ]
ASF GitHub Bot logged work on HIVE-26102:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 06/Apr/22 09:28
Start Date: 06/Apr/22 09:28
Worklog Time Spent: 10m
Work Description: marton-bod commented on code in PR #3131:
URL: https://github.com/apache/hive/pull/3131#discussion_r843714944
##########
iceberg/iceberg-handler/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergOutputCommitter.java:
##########
@@ -118,18 +120,23 @@ public void commitTask(TaskAttemptContext originalContext) throws IOException {
         .run(output -> {
           Table table = HiveIcebergStorageHandler.table(context.getJobConf(), output);
           if (table != null) {
-            HiveIcebergRecordWriter writer = writers.get(output);
-            DataFile[] closedFiles;
+            HiveIcebergWriter writer = writers.get(output);
+            HiveIcebergWriter delWriter = delWriters.get(output);
+            String fileForCommitLocation = generateFileForCommitLocation(table.location(), jobConf,
+                attemptID.getJobID(), attemptID.getTaskID().getId());
+            if (delWriter != null) {
+              DeleteFile[] closedFiles = delWriter.deleteFiles().toArray(new DeleteFile[0]);
+              createFileForCommit(closedFiles, fileForCommitLocation, table.io());
Review Comment:
> the S3 files is where we will spend some serious time
Makes sense.
As discussed, let's create a container object which we can serialize/deserialize.
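
To illustrate the suggestion above, here is a minimal sketch of what such a serializable container might look like. The class name FilesForCommit and its accessors are assumptions for illustration, not necessarily what the PR adopts; the point is that a single object bundles a task's closed data files and delete files, so the committer writes and later reads back one commit file per task instead of separate files per writer type.

    import java.io.Serializable;
    import java.util.Collections;
    import java.util.List;

    import org.apache.iceberg.DataFile;
    import org.apache.iceberg.DeleteFile;

    // Hypothetical container bundling the data files and delete files closed
    // by a task, so one serialized object covers both writer outputs.
    public class FilesForCommit implements Serializable {

      private final List<DataFile> dataFiles;
      private final List<DeleteFile> deleteFiles;

      public FilesForCommit(List<DataFile> dataFiles, List<DeleteFile> deleteFiles) {
        this.dataFiles = dataFiles;
        this.deleteFiles = deleteFiles;
      }

      // Convenience factories for insert-only and delete-only tasks.
      public static FilesForCommit onlyData(List<DataFile> dataFiles) {
        return new FilesForCommit(dataFiles, Collections.emptyList());
      }

      public static FilesForCommit onlyDelete(List<DeleteFile> deleteFiles) {
        return new FilesForCommit(Collections.emptyList(), deleteFiles);
      }

      public List<DataFile> dataFiles() {
        return dataFiles;
      }

      public List<DeleteFile> deleteFiles() {
        return deleteFiles;
      }
    }

Since reading the commit files back from S3 is the expensive part, bundling both file types into one object halves the round-trips per task compared to persisting the DataFile[] and DeleteFile[] arrays as separate commit files.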
Issue Time Tracking
-------------------
Worklog Id: (was: 753277)
Time Spent: 6h 20m (was: 6h 10m)
> Implement DELETE statements for Iceberg tables
> ----------------------------------------------
>
> Key: HIVE-26102
> URL: https://issues.apache.org/jira/browse/HIVE-26102
> Project: Hive
> Issue Type: New Feature
> Reporter: Marton Bod
> Assignee: Marton Bod
> Priority: Major
> Labels: pull-request-available
> Time Spent: 6h 20m
> Remaining Estimate: 0h
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)